00:00:00.000 Started by upstream project "autotest-per-patch" build number 132776 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.126 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.127 The recommended git tool is: git 00:00:00.127 using credential 00000000-0000-0000-0000-000000000002 00:00:00.129 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.190 Fetching changes from the remote Git repository 00:00:00.193 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.241 Using shallow fetch with depth 1 00:00:00.241 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.241 > git --version # timeout=10 00:00:00.277 > git --version # 'git version 2.39.2' 00:00:00.277 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.303 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.303 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.388 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.400 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.411 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.411 > git config core.sparsecheckout # timeout=10 00:00:06.422 > git read-tree -mu HEAD # timeout=10 00:00:06.437 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.459 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.459 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.548 [Pipeline] Start of Pipeline 00:00:06.562 [Pipeline] library 00:00:06.564 Loading library shm_lib@master 00:00:06.564 Library shm_lib@master is cached. Copying from home. 00:00:06.585 [Pipeline] node 00:55:30.927 Running on VM-host-WFP1 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:55:30.929 [Pipeline] { 00:55:30.940 [Pipeline] catchError 00:55:30.941 [Pipeline] { 00:55:30.954 [Pipeline] wrap 00:55:30.964 [Pipeline] { 00:55:30.974 [Pipeline] stage 00:55:30.975 [Pipeline] { (Prologue) 00:55:30.994 [Pipeline] echo 00:55:30.996 Node: VM-host-WFP1 00:55:31.002 [Pipeline] cleanWs 00:55:31.012 [WS-CLEANUP] Deleting project workspace... 00:55:31.012 [WS-CLEANUP] Deferred wipeout is used... 
00:55:31.018 [WS-CLEANUP] done 00:55:31.248 [Pipeline] setCustomBuildProperty 00:55:31.353 [Pipeline] httpRequest 00:55:31.766 [Pipeline] echo 00:55:31.768 Sorcerer 10.211.164.101 is alive 00:55:31.779 [Pipeline] retry 00:55:31.781 [Pipeline] { 00:55:31.796 [Pipeline] httpRequest 00:55:31.801 HttpMethod: GET 00:55:31.801 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:55:31.802 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:55:31.803 Response Code: HTTP/1.1 200 OK 00:55:31.804 Success: Status code 200 is in the accepted range: 200,404 00:55:31.804 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:55:31.949 [Pipeline] } 00:55:31.967 [Pipeline] // retry 00:55:31.976 [Pipeline] sh 00:55:32.263 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:55:32.282 [Pipeline] httpRequest 00:55:32.683 [Pipeline] echo 00:55:32.685 Sorcerer 10.211.164.101 is alive 00:55:32.698 [Pipeline] retry 00:55:32.701 [Pipeline] { 00:55:32.718 [Pipeline] httpRequest 00:55:32.724 HttpMethod: GET 00:55:32.724 URL: http://10.211.164.101/packages/spdk_15ce1ba92a7f3803af8b26504042f979d14b95c5.tar.gz 00:55:32.725 Sending request to url: http://10.211.164.101/packages/spdk_15ce1ba92a7f3803af8b26504042f979d14b95c5.tar.gz 00:55:32.726 Response Code: HTTP/1.1 200 OK 00:55:32.726 Success: Status code 200 is in the accepted range: 200,404 00:55:32.727 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_15ce1ba92a7f3803af8b26504042f979d14b95c5.tar.gz 00:55:34.996 [Pipeline] } 00:55:35.018 [Pipeline] // retry 00:55:35.027 [Pipeline] sh 00:55:35.358 + tar --no-same-owner -xf spdk_15ce1ba92a7f3803af8b26504042f979d14b95c5.tar.gz 00:55:37.924 [Pipeline] sh 00:55:38.204 + git -C spdk log --oneline -n5 00:55:38.204 15ce1ba92 lib/reduce: Send unmap to backing dev 00:55:38.204 5f032e8b7 lib/reduce: Write Zero to partial chunk when unmapping the chunks. 
00:55:38.204 a5e6ecf28 lib/reduce: Data copy logic in thin read operations 00:55:38.204 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair 00:55:38.204 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting 00:55:38.219 [Pipeline] writeFile 00:55:38.232 [Pipeline] sh 00:55:38.562 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:55:38.572 [Pipeline] sh 00:55:38.848 + cat autorun-spdk.conf 00:55:38.848 SPDK_RUN_FUNCTIONAL_TEST=1 00:55:38.848 SPDK_TEST_NVMF=1 00:55:38.848 SPDK_TEST_NVMF_TRANSPORT=tcp 00:55:38.848 SPDK_TEST_URING=1 00:55:38.848 SPDK_TEST_USDT=1 00:55:38.848 SPDK_RUN_UBSAN=1 00:55:38.848 NET_TYPE=virt 00:55:38.848 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:55:38.855 RUN_NIGHTLY=0 00:55:38.856 [Pipeline] } 00:55:38.869 [Pipeline] // stage 00:55:38.880 [Pipeline] stage 00:55:38.882 [Pipeline] { (Run VM) 00:55:38.892 [Pipeline] sh 00:55:39.171 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:55:39.171 + echo 'Start stage prepare_nvme.sh' 00:55:39.171 Start stage prepare_nvme.sh 00:55:39.171 + [[ -n 2 ]] 00:55:39.171 + disk_prefix=ex2 00:55:39.171 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:55:39.171 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:55:39.171 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:55:39.171 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:55:39.171 ++ SPDK_TEST_NVMF=1 00:55:39.171 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:55:39.171 ++ SPDK_TEST_URING=1 00:55:39.171 ++ SPDK_TEST_USDT=1 00:55:39.171 ++ SPDK_RUN_UBSAN=1 00:55:39.171 ++ NET_TYPE=virt 00:55:39.171 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:55:39.171 ++ RUN_NIGHTLY=0 00:55:39.171 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:55:39.171 + nvme_files=() 00:55:39.171 + declare -A nvme_files 00:55:39.171 + backend_dir=/var/lib/libvirt/images/backends 00:55:39.171 + nvme_files['nvme.img']=5G 00:55:39.171 + nvme_files['nvme-cmb.img']=5G 00:55:39.171 + nvme_files['nvme-multi0.img']=4G 00:55:39.171 + nvme_files['nvme-multi1.img']=4G 00:55:39.171 + nvme_files['nvme-multi2.img']=4G 00:55:39.171 + nvme_files['nvme-openstack.img']=8G 00:55:39.171 + nvme_files['nvme-zns.img']=5G 00:55:39.171 + (( SPDK_TEST_NVME_PMR == 1 )) 00:55:39.171 + (( SPDK_TEST_FTL == 1 )) 00:55:39.171 + (( SPDK_TEST_NVME_FDP == 1 )) 00:55:39.171 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:55:39.171 + for nvme in "${!nvme_files[@]}" 00:55:39.171 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:55:39.171 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:55:39.171 + for nvme in "${!nvme_files[@]}" 00:55:39.171 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:55:39.171 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:55:39.171 + for nvme in "${!nvme_files[@]}" 00:55:39.171 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:55:39.171 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:55:39.171 + for nvme in "${!nvme_files[@]}" 00:55:39.171 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:55:39.171 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:55:39.171 + for nvme in "${!nvme_files[@]}" 00:55:39.171 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:55:39.171 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:55:39.171 + for nvme in "${!nvme_files[@]}" 00:55:39.171 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:55:39.430 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:55:39.430 + for nvme in "${!nvme_files[@]}" 00:55:39.430 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:55:39.430 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:55:39.430 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:55:39.430 + echo 'End stage prepare_nvme.sh' 00:55:39.430 End stage prepare_nvme.sh 00:55:39.439 [Pipeline] sh 00:55:39.727 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:55:39.727 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39 00:55:39.727 00:55:39.727 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:55:39.727 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:55:39.727 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:55:39.727 HELP=0 00:55:39.727 DRY_RUN=0 00:55:39.727 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:55:39.727 NVME_DISKS_TYPE=nvme,nvme, 00:55:39.727 NVME_AUTO_CREATE=0 00:55:39.727 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:55:39.727 NVME_CMB=,, 00:55:39.727 NVME_PMR=,, 00:55:39.727 NVME_ZNS=,, 00:55:39.727 NVME_MS=,, 00:55:39.727 NVME_FDP=,, 
00:55:39.727 SPDK_VAGRANT_DISTRO=fedora39 00:55:39.727 SPDK_VAGRANT_VMCPU=10 00:55:39.727 SPDK_VAGRANT_VMRAM=12288 00:55:39.727 SPDK_VAGRANT_PROVIDER=libvirt 00:55:39.727 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:55:39.727 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:55:39.727 SPDK_OPENSTACK_NETWORK=0 00:55:39.727 VAGRANT_PACKAGE_BOX=0 00:55:39.727 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:55:39.727 FORCE_DISTRO=true 00:55:39.727 VAGRANT_BOX_VERSION= 00:55:39.727 EXTRA_VAGRANTFILES= 00:55:39.727 NIC_MODEL=e1000 00:55:39.727 00:55:39.727 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:55:39.727 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:55:42.256 Bringing machine 'default' up with 'libvirt' provider... 00:55:43.193 ==> default: Creating image (snapshot of base box volume). 00:55:43.193 ==> default: Creating domain with the following settings... 00:55:43.193 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733723676_ed5abc68507bbf6a964e 00:55:43.193 ==> default: -- Domain type: kvm 00:55:43.193 ==> default: -- Cpus: 10 00:55:43.193 ==> default: -- Feature: acpi 00:55:43.193 ==> default: -- Feature: apic 00:55:43.193 ==> default: -- Feature: pae 00:55:43.193 ==> default: -- Memory: 12288M 00:55:43.193 ==> default: -- Memory Backing: hugepages: 00:55:43.193 ==> default: -- Management MAC: 00:55:43.193 ==> default: -- Loader: 00:55:43.193 ==> default: -- Nvram: 00:55:43.193 ==> default: -- Base box: spdk/fedora39 00:55:43.193 ==> default: -- Storage pool: default 00:55:43.193 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733723676_ed5abc68507bbf6a964e.img (20G) 00:55:43.193 ==> default: -- Volume Cache: default 00:55:43.193 ==> default: -- Kernel: 00:55:43.193 ==> default: -- Initrd: 00:55:43.193 ==> default: -- Graphics Type: vnc 00:55:43.193 ==> default: -- Graphics Port: -1 00:55:43.193 ==> default: -- Graphics IP: 127.0.0.1 00:55:43.193 ==> default: -- Graphics Password: Not defined 00:55:43.193 ==> default: -- Video Type: cirrus 00:55:43.193 ==> default: -- Video VRAM: 9216 00:55:43.193 ==> default: -- Sound Type: 00:55:43.193 ==> default: -- Keymap: en-us 00:55:43.193 ==> default: -- TPM Path: 00:55:43.193 ==> default: -- INPUT: type=mouse, bus=ps2 00:55:43.193 ==> default: -- Command line args: 00:55:43.193 ==> default: -> value=-device, 00:55:43.193 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:55:43.193 ==> default: -> value=-drive, 00:55:43.193 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:55:43.193 ==> default: -> value=-device, 00:55:43.193 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:55:43.193 ==> default: -> value=-device, 00:55:43.193 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:55:43.193 ==> default: -> value=-drive, 00:55:43.193 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:55:43.193 ==> default: -> value=-device, 00:55:43.194 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:55:43.194 ==> default: -> value=-drive, 00:55:43.194 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:55:43.194 ==> default: -> value=-device, 00:55:43.194 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:55:43.194 ==> default: -> value=-drive, 00:55:43.194 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:55:43.194 ==> default: -> value=-device, 00:55:43.194 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:55:43.762 ==> default: Creating shared folders metadata... 00:55:43.762 ==> default: Starting domain. 00:55:45.668 ==> default: Waiting for domain to get an IP address... 00:56:03.806 ==> default: Waiting for SSH to become available... 00:56:03.806 ==> default: Configuring and enabling network interfaces... 00:56:07.999 default: SSH address: 192.168.121.123:22 00:56:07.999 default: SSH username: vagrant 00:56:07.999 default: SSH auth method: private key 00:56:11.293 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:56:19.406 ==> default: Mounting SSHFS shared folder... 00:56:21.979 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:56:21.979 ==> default: Checking Mount.. 00:56:23.353 ==> default: Folder Successfully Mounted! 00:56:23.353 ==> default: Running provisioner: file... 00:56:24.731 default: ~/.gitconfig => .gitconfig 00:56:24.991 00:56:24.991 SUCCESS! 00:56:24.991 00:56:24.991 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:56:24.991 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:56:24.991 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:56:24.991 00:56:25.002 [Pipeline] } 00:56:25.019 [Pipeline] // stage 00:56:25.030 [Pipeline] dir 00:56:25.032 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:56:25.034 [Pipeline] { 00:56:25.048 [Pipeline] catchError 00:56:25.050 [Pipeline] { 00:56:25.063 [Pipeline] sh 00:56:25.345 + vagrant ssh-config --host vagrant 00:56:25.345 + sed -ne /^Host/,$p 00:56:25.345 + tee ssh_conf 00:56:27.897 Host vagrant 00:56:27.897 HostName 192.168.121.123 00:56:27.897 User vagrant 00:56:27.897 Port 22 00:56:27.897 UserKnownHostsFile /dev/null 00:56:27.897 StrictHostKeyChecking no 00:56:27.897 PasswordAuthentication no 00:56:27.897 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:56:27.897 IdentitiesOnly yes 00:56:27.897 LogLevel FATAL 00:56:27.897 ForwardAgent yes 00:56:27.897 ForwardX11 yes 00:56:27.897 00:56:27.910 [Pipeline] withEnv 00:56:27.912 [Pipeline] { 00:56:27.925 [Pipeline] sh 00:56:28.203 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:56:28.203 source /etc/os-release 00:56:28.203 [[ -e /image.version ]] && img=$(< /image.version) 00:56:28.203 # Minimal, systemd-like check. 
00:56:28.203 if [[ -e /.dockerenv ]]; then 00:56:28.203 # Clear garbage from the node's name: 00:56:28.203 # agt-er_autotest_547-896 -> autotest_547-896 00:56:28.203 # $HOSTNAME is the actual container id 00:56:28.203 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:56:28.203 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:56:28.203 # We can assume this is a mount from a host where container is running, 00:56:28.203 # so fetch its hostname to easily identify the target swarm worker. 00:56:28.203 container="$(< /etc/hostname) ($agent)" 00:56:28.203 else 00:56:28.203 # Fallback 00:56:28.203 container=$agent 00:56:28.203 fi 00:56:28.203 fi 00:56:28.203 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:56:28.203 00:56:28.471 [Pipeline] } 00:56:28.485 [Pipeline] // withEnv 00:56:28.494 [Pipeline] setCustomBuildProperty 00:56:28.508 [Pipeline] stage 00:56:28.511 [Pipeline] { (Tests) 00:56:28.529 [Pipeline] sh 00:56:28.811 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:56:29.081 [Pipeline] sh 00:56:29.360 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:56:29.629 [Pipeline] timeout 00:56:29.629 Timeout set to expire in 1 hr 0 min 00:56:29.630 [Pipeline] { 00:56:29.642 [Pipeline] sh 00:56:29.921 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:56:30.487 HEAD is now at 15ce1ba92 lib/reduce: Send unmap to backing dev 00:56:30.499 [Pipeline] sh 00:56:30.785 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:56:31.055 [Pipeline] sh 00:56:31.333 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:56:31.606 [Pipeline] sh 00:56:31.885 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:56:32.143 ++ readlink -f spdk_repo 00:56:32.143 + DIR_ROOT=/home/vagrant/spdk_repo 00:56:32.143 + [[ -n /home/vagrant/spdk_repo ]] 00:56:32.143 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:56:32.143 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:56:32.143 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:56:32.143 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:56:32.143 + [[ -d /home/vagrant/spdk_repo/output ]] 00:56:32.143 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:56:32.143 + cd /home/vagrant/spdk_repo 00:56:32.143 + source /etc/os-release 00:56:32.143 ++ NAME='Fedora Linux' 00:56:32.143 ++ VERSION='39 (Cloud Edition)' 00:56:32.143 ++ ID=fedora 00:56:32.143 ++ VERSION_ID=39 00:56:32.143 ++ VERSION_CODENAME= 00:56:32.143 ++ PLATFORM_ID=platform:f39 00:56:32.143 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:56:32.143 ++ ANSI_COLOR='0;38;2;60;110;180' 00:56:32.143 ++ LOGO=fedora-logo-icon 00:56:32.143 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:56:32.143 ++ HOME_URL=https://fedoraproject.org/ 00:56:32.143 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:56:32.143 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:56:32.143 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:56:32.143 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:56:32.143 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:56:32.143 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:56:32.143 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:56:32.143 ++ SUPPORT_END=2024-11-12 00:56:32.143 ++ VARIANT='Cloud Edition' 00:56:32.143 ++ VARIANT_ID=cloud 00:56:32.143 + uname -a 00:56:32.143 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:56:32.143 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:56:32.710 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:56:32.710 Hugepages 00:56:32.710 node hugesize free / total 00:56:32.710 node0 1048576kB 0 / 0 00:56:32.710 node0 2048kB 0 / 0 00:56:32.710 00:56:32.710 Type BDF Vendor Device NUMA Driver Device Block devices 00:56:32.710 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:56:32.710 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:56:32.710 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:56:32.710 + rm -f /tmp/spdk-ld-path 00:56:32.710 + source autorun-spdk.conf 00:56:32.710 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:56:32.710 ++ SPDK_TEST_NVMF=1 00:56:32.710 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:56:32.710 ++ SPDK_TEST_URING=1 00:56:32.710 ++ SPDK_TEST_USDT=1 00:56:32.710 ++ SPDK_RUN_UBSAN=1 00:56:32.710 ++ NET_TYPE=virt 00:56:32.710 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:56:32.710 ++ RUN_NIGHTLY=0 00:56:32.710 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:56:32.710 + [[ -n '' ]] 00:56:32.710 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:56:32.710 + for M in /var/spdk/build-*-manifest.txt 00:56:32.710 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:56:32.710 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:56:32.969 + for M in /var/spdk/build-*-manifest.txt 00:56:32.969 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:56:32.969 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:56:32.969 + for M in /var/spdk/build-*-manifest.txt 00:56:32.969 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:56:32.969 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:56:32.969 ++ uname 00:56:32.969 + [[ Linux == \L\i\n\u\x ]] 00:56:32.969 + sudo dmesg -T 00:56:32.969 + sudo dmesg --clear 00:56:32.969 + dmesg_pid=5201 00:56:32.969 + [[ Fedora Linux == FreeBSD ]] 00:56:32.969 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:56:32.969 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:56:32.969 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:56:32.969 + sudo dmesg -Tw 00:56:32.969 + [[ -x /usr/src/fio-static/fio ]] 00:56:32.969 + export FIO_BIN=/usr/src/fio-static/fio 00:56:32.969 + FIO_BIN=/usr/src/fio-static/fio 00:56:32.969 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:56:32.969 + [[ ! -v VFIO_QEMU_BIN ]] 00:56:32.969 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:56:32.969 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:56:32.969 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:56:32.969 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:56:32.969 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:56:32.969 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:56:32.969 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:56:32.969 05:55:27 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:56:32.969 05:55:27 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:56:32.969 05:55:27 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:56:32.969 05:55:27 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:56:32.969 05:55:27 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:56:32.969 05:55:27 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:56:32.969 05:55:27 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:56:32.969 05:55:27 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:56:32.969 05:55:27 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:56:32.969 05:55:27 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:56:32.969 05:55:27 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:56:32.969 05:55:27 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:56:32.969 05:55:27 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:56:33.242 05:55:27 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:56:33.242 05:55:27 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:56:33.242 05:55:27 -- scripts/common.sh@15 -- $ shopt -s extglob 00:56:33.242 05:55:27 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:56:33.242 05:55:27 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:56:33.242 05:55:27 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:56:33.242 05:55:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:33.242 05:55:27 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:33.242 05:55:27 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:33.242 05:55:27 -- paths/export.sh@5 -- $ export PATH 00:56:33.242 05:55:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:33.242 05:55:27 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:56:33.242 05:55:27 -- common/autobuild_common.sh@493 -- $ date +%s 00:56:33.242 05:55:27 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733723727.XXXXXX 00:56:33.242 05:55:27 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733723727.PYZyMy 00:56:33.242 05:55:27 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:56:33.242 05:55:27 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:56:33.242 05:55:27 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:56:33.242 05:55:27 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:56:33.242 05:55:27 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:56:33.242 05:55:27 -- common/autobuild_common.sh@509 -- $ get_config_params 00:56:33.242 05:55:27 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:56:33.242 05:55:27 -- common/autotest_common.sh@10 -- $ set +x 00:56:33.242 05:55:27 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:56:33.242 05:55:27 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:56:33.242 05:55:27 -- pm/common@17 -- $ local monitor 00:56:33.242 05:55:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:56:33.242 05:55:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:56:33.242 05:55:27 -- pm/common@25 -- $ sleep 1 00:56:33.242 05:55:27 -- pm/common@21 -- $ date +%s 00:56:33.242 05:55:27 -- pm/common@21 -- $ date +%s 00:56:33.242 05:55:27 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733723727 00:56:33.242 05:55:27 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733723727 00:56:33.242 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733723727_collect-cpu-load.pm.log 00:56:33.242 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733723727_collect-vmstat.pm.log 00:56:34.206 05:55:28 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:56:34.206 05:55:28 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:56:34.206 05:55:28 -- spdk/autobuild.sh@12 -- $ umask 022 00:56:34.206 05:55:28 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:56:34.206 05:55:28 -- spdk/autobuild.sh@16 -- $ date -u 00:56:34.206 Mon Dec 9 05:55:28 AM UTC 2024 00:56:34.206 05:55:28 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:56:34.206 v25.01-pre-305-g15ce1ba92 00:56:34.206 05:55:28 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:56:34.206 05:55:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:56:34.206 05:55:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:56:34.206 05:55:28 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:56:34.206 05:55:28 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:56:34.206 05:55:28 -- common/autotest_common.sh@10 -- $ set +x 00:56:34.206 ************************************ 00:56:34.206 START TEST ubsan 00:56:34.207 ************************************ 00:56:34.207 using ubsan 00:56:34.207 05:55:28 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:56:34.207 00:56:34.207 real 0m0.001s 00:56:34.207 user 0m0.000s 00:56:34.207 sys 0m0.000s 00:56:34.207 05:55:28 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:56:34.207 05:55:28 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:56:34.207 ************************************ 00:56:34.207 END TEST ubsan 00:56:34.207 ************************************ 00:56:34.207 05:55:28 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:56:34.207 05:55:28 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:56:34.207 05:55:28 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:56:34.207 05:55:28 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:56:34.207 05:55:28 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:56:34.207 05:55:28 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:56:34.207 05:55:28 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:56:34.207 05:55:28 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:56:34.207 05:55:28 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:56:34.466 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:56:34.466 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:56:35.032 Using 'verbs' RDMA provider 00:56:50.850 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:57:08.931 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:57:08.931 Creating mk/config.mk...done. 00:57:08.931 Creating mk/cc.flags.mk...done. 00:57:08.931 Type 'make' to build. 
00:57:08.931 05:56:01 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:57:08.931 05:56:01 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:57:08.931 05:56:01 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:57:08.931 05:56:01 -- common/autotest_common.sh@10 -- $ set +x 00:57:08.931 ************************************ 00:57:08.931 START TEST make 00:57:08.931 ************************************ 00:57:08.931 05:56:01 make -- common/autotest_common.sh@1129 -- $ make -j10 00:57:08.931 make[1]: Nothing to be done for 'all'. 00:57:17.068 The Meson build system 00:57:17.068 Version: 1.5.0 00:57:17.068 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:57:17.068 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:57:17.068 Build type: native build 00:57:17.068 Program cat found: YES (/usr/bin/cat) 00:57:17.068 Project name: DPDK 00:57:17.068 Project version: 24.03.0 00:57:17.068 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:57:17.068 C linker for the host machine: cc ld.bfd 2.40-14 00:57:17.068 Host machine cpu family: x86_64 00:57:17.068 Host machine cpu: x86_64 00:57:17.068 Message: ## Building in Developer Mode ## 00:57:17.068 Program pkg-config found: YES (/usr/bin/pkg-config) 00:57:17.068 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:57:17.068 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:57:17.068 Program python3 found: YES (/usr/bin/python3) 00:57:17.068 Program cat found: YES (/usr/bin/cat) 00:57:17.068 Compiler for C supports arguments -march=native: YES 00:57:17.068 Checking for size of "void *" : 8 00:57:17.068 Checking for size of "void *" : 8 (cached) 00:57:17.068 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:57:17.068 Library m found: YES 00:57:17.068 Library numa found: YES 00:57:17.068 Has header "numaif.h" : YES 00:57:17.068 Library fdt found: NO 00:57:17.068 Library execinfo found: NO 00:57:17.068 Has header "execinfo.h" : YES 00:57:17.068 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:57:17.068 Run-time dependency libarchive found: NO (tried pkgconfig) 00:57:17.068 Run-time dependency libbsd found: NO (tried pkgconfig) 00:57:17.068 Run-time dependency jansson found: NO (tried pkgconfig) 00:57:17.068 Run-time dependency openssl found: YES 3.1.1 00:57:17.068 Run-time dependency libpcap found: YES 1.10.4 00:57:17.068 Has header "pcap.h" with dependency libpcap: YES 00:57:17.068 Compiler for C supports arguments -Wcast-qual: YES 00:57:17.068 Compiler for C supports arguments -Wdeprecated: YES 00:57:17.068 Compiler for C supports arguments -Wformat: YES 00:57:17.068 Compiler for C supports arguments -Wformat-nonliteral: NO 00:57:17.068 Compiler for C supports arguments -Wformat-security: NO 00:57:17.068 Compiler for C supports arguments -Wmissing-declarations: YES 00:57:17.068 Compiler for C supports arguments -Wmissing-prototypes: YES 00:57:17.068 Compiler for C supports arguments -Wnested-externs: YES 00:57:17.068 Compiler for C supports arguments -Wold-style-definition: YES 00:57:17.068 Compiler for C supports arguments -Wpointer-arith: YES 00:57:17.068 Compiler for C supports arguments -Wsign-compare: YES 00:57:17.068 Compiler for C supports arguments -Wstrict-prototypes: YES 00:57:17.068 Compiler for C supports arguments -Wundef: YES 00:57:17.068 Compiler for C supports arguments -Wwrite-strings: YES 00:57:17.068 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:57:17.068 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:57:17.068 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:57:17.068 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:57:17.068 Program objdump found: YES (/usr/bin/objdump) 00:57:17.068 Compiler for C supports arguments -mavx512f: YES 00:57:17.068 Checking if "AVX512 checking" compiles: YES 00:57:17.068 Fetching value of define "__SSE4_2__" : 1 00:57:17.068 Fetching value of define "__AES__" : 1 00:57:17.068 Fetching value of define "__AVX__" : 1 00:57:17.068 Fetching value of define "__AVX2__" : 1 00:57:17.068 Fetching value of define "__AVX512BW__" : 1 00:57:17.068 Fetching value of define "__AVX512CD__" : 1 00:57:17.068 Fetching value of define "__AVX512DQ__" : 1 00:57:17.068 Fetching value of define "__AVX512F__" : 1 00:57:17.068 Fetching value of define "__AVX512VL__" : 1 00:57:17.068 Fetching value of define "__PCLMUL__" : 1 00:57:17.068 Fetching value of define "__RDRND__" : 1 00:57:17.068 Fetching value of define "__RDSEED__" : 1 00:57:17.068 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:57:17.068 Fetching value of define "__znver1__" : (undefined) 00:57:17.068 Fetching value of define "__znver2__" : (undefined) 00:57:17.068 Fetching value of define "__znver3__" : (undefined) 00:57:17.068 Fetching value of define "__znver4__" : (undefined) 00:57:17.068 Compiler for C supports arguments -Wno-format-truncation: YES 00:57:17.068 Message: lib/log: Defining dependency "log" 00:57:17.068 Message: lib/kvargs: Defining dependency "kvargs" 00:57:17.068 Message: lib/telemetry: Defining dependency "telemetry" 00:57:17.068 Checking for function "getentropy" : NO 00:57:17.068 Message: lib/eal: Defining dependency "eal" 00:57:17.068 Message: lib/ring: Defining dependency "ring" 00:57:17.068 Message: lib/rcu: Defining dependency "rcu" 00:57:17.068 Message: lib/mempool: Defining dependency "mempool" 00:57:17.068 Message: lib/mbuf: Defining dependency "mbuf" 00:57:17.068 Fetching value of define "__PCLMUL__" : 1 (cached) 00:57:17.068 Fetching value of define "__AVX512F__" : 1 (cached) 00:57:17.068 Fetching value of define "__AVX512BW__" : 1 (cached) 00:57:17.068 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:57:17.068 Fetching value of define "__AVX512VL__" : 1 (cached) 00:57:17.068 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:57:17.068 Compiler for C supports arguments -mpclmul: YES 00:57:17.068 Compiler for C supports arguments -maes: YES 00:57:17.068 Compiler for C supports arguments -mavx512f: YES (cached) 00:57:17.068 Compiler for C supports arguments -mavx512bw: YES 00:57:17.068 Compiler for C supports arguments -mavx512dq: YES 00:57:17.068 Compiler for C supports arguments -mavx512vl: YES 00:57:17.068 Compiler for C supports arguments -mvpclmulqdq: YES 00:57:17.068 Compiler for C supports arguments -mavx2: YES 00:57:17.068 Compiler for C supports arguments -mavx: YES 00:57:17.068 Message: lib/net: Defining dependency "net" 00:57:17.068 Message: lib/meter: Defining dependency "meter" 00:57:17.068 Message: lib/ethdev: Defining dependency "ethdev" 00:57:17.068 Message: lib/pci: Defining dependency "pci" 00:57:17.068 Message: lib/cmdline: Defining dependency "cmdline" 00:57:17.068 Message: lib/hash: Defining dependency "hash" 00:57:17.068 Message: lib/timer: Defining dependency "timer" 00:57:17.068 Message: lib/compressdev: Defining dependency "compressdev" 00:57:17.068 Message: 
lib/cryptodev: Defining dependency "cryptodev" 00:57:17.068 Message: lib/dmadev: Defining dependency "dmadev" 00:57:17.068 Compiler for C supports arguments -Wno-cast-qual: YES 00:57:17.069 Message: lib/power: Defining dependency "power" 00:57:17.069 Message: lib/reorder: Defining dependency "reorder" 00:57:17.069 Message: lib/security: Defining dependency "security" 00:57:17.069 Has header "linux/userfaultfd.h" : YES 00:57:17.069 Has header "linux/vduse.h" : YES 00:57:17.069 Message: lib/vhost: Defining dependency "vhost" 00:57:17.069 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:57:17.069 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:57:17.069 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:57:17.069 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:57:17.069 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:57:17.069 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:57:17.069 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:57:17.069 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:57:17.069 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:57:17.069 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:57:17.069 Program doxygen found: YES (/usr/local/bin/doxygen) 00:57:17.069 Configuring doxy-api-html.conf using configuration 00:57:17.069 Configuring doxy-api-man.conf using configuration 00:57:17.069 Program mandb found: YES (/usr/bin/mandb) 00:57:17.069 Program sphinx-build found: NO 00:57:17.069 Configuring rte_build_config.h using configuration 00:57:17.069 Message: 00:57:17.069 ================= 00:57:17.069 Applications Enabled 00:57:17.069 ================= 00:57:17.069 00:57:17.069 apps: 00:57:17.069 00:57:17.069 00:57:17.069 Message: 00:57:17.069 ================= 00:57:17.069 Libraries Enabled 00:57:17.069 ================= 00:57:17.069 00:57:17.069 libs: 00:57:17.069 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:57:17.069 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:57:17.069 cryptodev, dmadev, power, reorder, security, vhost, 00:57:17.069 00:57:17.069 Message: 00:57:17.069 =============== 00:57:17.069 Drivers Enabled 00:57:17.069 =============== 00:57:17.069 00:57:17.069 common: 00:57:17.069 00:57:17.069 bus: 00:57:17.069 pci, vdev, 00:57:17.069 mempool: 00:57:17.069 ring, 00:57:17.069 dma: 00:57:17.069 00:57:17.069 net: 00:57:17.069 00:57:17.069 crypto: 00:57:17.069 00:57:17.069 compress: 00:57:17.069 00:57:17.069 vdpa: 00:57:17.069 00:57:17.069 00:57:17.069 Message: 00:57:17.069 ================= 00:57:17.069 Content Skipped 00:57:17.069 ================= 00:57:17.069 00:57:17.069 apps: 00:57:17.069 dumpcap: explicitly disabled via build config 00:57:17.069 graph: explicitly disabled via build config 00:57:17.069 pdump: explicitly disabled via build config 00:57:17.069 proc-info: explicitly disabled via build config 00:57:17.069 test-acl: explicitly disabled via build config 00:57:17.069 test-bbdev: explicitly disabled via build config 00:57:17.069 test-cmdline: explicitly disabled via build config 00:57:17.069 test-compress-perf: explicitly disabled via build config 00:57:17.069 test-crypto-perf: explicitly disabled via build config 00:57:17.069 test-dma-perf: explicitly disabled via build config 00:57:17.069 test-eventdev: explicitly disabled via build config 00:57:17.069 test-fib: explicitly disabled via build config 
00:57:17.069 test-flow-perf: explicitly disabled via build config 00:57:17.069 test-gpudev: explicitly disabled via build config 00:57:17.069 test-mldev: explicitly disabled via build config 00:57:17.069 test-pipeline: explicitly disabled via build config 00:57:17.069 test-pmd: explicitly disabled via build config 00:57:17.069 test-regex: explicitly disabled via build config 00:57:17.069 test-sad: explicitly disabled via build config 00:57:17.069 test-security-perf: explicitly disabled via build config 00:57:17.069 00:57:17.069 libs: 00:57:17.069 argparse: explicitly disabled via build config 00:57:17.069 metrics: explicitly disabled via build config 00:57:17.069 acl: explicitly disabled via build config 00:57:17.069 bbdev: explicitly disabled via build config 00:57:17.069 bitratestats: explicitly disabled via build config 00:57:17.069 bpf: explicitly disabled via build config 00:57:17.069 cfgfile: explicitly disabled via build config 00:57:17.069 distributor: explicitly disabled via build config 00:57:17.069 efd: explicitly disabled via build config 00:57:17.069 eventdev: explicitly disabled via build config 00:57:17.069 dispatcher: explicitly disabled via build config 00:57:17.069 gpudev: explicitly disabled via build config 00:57:17.069 gro: explicitly disabled via build config 00:57:17.069 gso: explicitly disabled via build config 00:57:17.069 ip_frag: explicitly disabled via build config 00:57:17.069 jobstats: explicitly disabled via build config 00:57:17.069 latencystats: explicitly disabled via build config 00:57:17.069 lpm: explicitly disabled via build config 00:57:17.069 member: explicitly disabled via build config 00:57:17.069 pcapng: explicitly disabled via build config 00:57:17.069 rawdev: explicitly disabled via build config 00:57:17.069 regexdev: explicitly disabled via build config 00:57:17.069 mldev: explicitly disabled via build config 00:57:17.069 rib: explicitly disabled via build config 00:57:17.069 sched: explicitly disabled via build config 00:57:17.069 stack: explicitly disabled via build config 00:57:17.069 ipsec: explicitly disabled via build config 00:57:17.069 pdcp: explicitly disabled via build config 00:57:17.069 fib: explicitly disabled via build config 00:57:17.069 port: explicitly disabled via build config 00:57:17.069 pdump: explicitly disabled via build config 00:57:17.069 table: explicitly disabled via build config 00:57:17.069 pipeline: explicitly disabled via build config 00:57:17.069 graph: explicitly disabled via build config 00:57:17.069 node: explicitly disabled via build config 00:57:17.069 00:57:17.069 drivers: 00:57:17.069 common/cpt: not in enabled drivers build config 00:57:17.069 common/dpaax: not in enabled drivers build config 00:57:17.069 common/iavf: not in enabled drivers build config 00:57:17.069 common/idpf: not in enabled drivers build config 00:57:17.069 common/ionic: not in enabled drivers build config 00:57:17.069 common/mvep: not in enabled drivers build config 00:57:17.069 common/octeontx: not in enabled drivers build config 00:57:17.069 bus/auxiliary: not in enabled drivers build config 00:57:17.069 bus/cdx: not in enabled drivers build config 00:57:17.069 bus/dpaa: not in enabled drivers build config 00:57:17.069 bus/fslmc: not in enabled drivers build config 00:57:17.069 bus/ifpga: not in enabled drivers build config 00:57:17.069 bus/platform: not in enabled drivers build config 00:57:17.069 bus/uacce: not in enabled drivers build config 00:57:17.069 bus/vmbus: not in enabled drivers build config 00:57:17.069 common/cnxk: not 
in enabled drivers build config 00:57:17.069 common/mlx5: not in enabled drivers build config 00:57:17.069 common/nfp: not in enabled drivers build config 00:57:17.069 common/nitrox: not in enabled drivers build config 00:57:17.069 common/qat: not in enabled drivers build config 00:57:17.069 common/sfc_efx: not in enabled drivers build config 00:57:17.069 mempool/bucket: not in enabled drivers build config 00:57:17.069 mempool/cnxk: not in enabled drivers build config 00:57:17.069 mempool/dpaa: not in enabled drivers build config 00:57:17.069 mempool/dpaa2: not in enabled drivers build config 00:57:17.069 mempool/octeontx: not in enabled drivers build config 00:57:17.069 mempool/stack: not in enabled drivers build config 00:57:17.069 dma/cnxk: not in enabled drivers build config 00:57:17.069 dma/dpaa: not in enabled drivers build config 00:57:17.069 dma/dpaa2: not in enabled drivers build config 00:57:17.069 dma/hisilicon: not in enabled drivers build config 00:57:17.069 dma/idxd: not in enabled drivers build config 00:57:17.069 dma/ioat: not in enabled drivers build config 00:57:17.069 dma/skeleton: not in enabled drivers build config 00:57:17.069 net/af_packet: not in enabled drivers build config 00:57:17.069 net/af_xdp: not in enabled drivers build config 00:57:17.069 net/ark: not in enabled drivers build config 00:57:17.069 net/atlantic: not in enabled drivers build config 00:57:17.069 net/avp: not in enabled drivers build config 00:57:17.069 net/axgbe: not in enabled drivers build config 00:57:17.069 net/bnx2x: not in enabled drivers build config 00:57:17.069 net/bnxt: not in enabled drivers build config 00:57:17.069 net/bonding: not in enabled drivers build config 00:57:17.069 net/cnxk: not in enabled drivers build config 00:57:17.069 net/cpfl: not in enabled drivers build config 00:57:17.069 net/cxgbe: not in enabled drivers build config 00:57:17.069 net/dpaa: not in enabled drivers build config 00:57:17.069 net/dpaa2: not in enabled drivers build config 00:57:17.069 net/e1000: not in enabled drivers build config 00:57:17.069 net/ena: not in enabled drivers build config 00:57:17.069 net/enetc: not in enabled drivers build config 00:57:17.069 net/enetfec: not in enabled drivers build config 00:57:17.069 net/enic: not in enabled drivers build config 00:57:17.069 net/failsafe: not in enabled drivers build config 00:57:17.069 net/fm10k: not in enabled drivers build config 00:57:17.069 net/gve: not in enabled drivers build config 00:57:17.069 net/hinic: not in enabled drivers build config 00:57:17.069 net/hns3: not in enabled drivers build config 00:57:17.069 net/i40e: not in enabled drivers build config 00:57:17.069 net/iavf: not in enabled drivers build config 00:57:17.069 net/ice: not in enabled drivers build config 00:57:17.069 net/idpf: not in enabled drivers build config 00:57:17.069 net/igc: not in enabled drivers build config 00:57:17.069 net/ionic: not in enabled drivers build config 00:57:17.069 net/ipn3ke: not in enabled drivers build config 00:57:17.069 net/ixgbe: not in enabled drivers build config 00:57:17.069 net/mana: not in enabled drivers build config 00:57:17.069 net/memif: not in enabled drivers build config 00:57:17.069 net/mlx4: not in enabled drivers build config 00:57:17.069 net/mlx5: not in enabled drivers build config 00:57:17.069 net/mvneta: not in enabled drivers build config 00:57:17.069 net/mvpp2: not in enabled drivers build config 00:57:17.069 net/netvsc: not in enabled drivers build config 00:57:17.070 net/nfb: not in enabled drivers build config 
00:57:17.070 net/nfp: not in enabled drivers build config 00:57:17.070 net/ngbe: not in enabled drivers build config 00:57:17.070 net/null: not in enabled drivers build config 00:57:17.070 net/octeontx: not in enabled drivers build config 00:57:17.070 net/octeon_ep: not in enabled drivers build config 00:57:17.070 net/pcap: not in enabled drivers build config 00:57:17.070 net/pfe: not in enabled drivers build config 00:57:17.070 net/qede: not in enabled drivers build config 00:57:17.070 net/ring: not in enabled drivers build config 00:57:17.070 net/sfc: not in enabled drivers build config 00:57:17.070 net/softnic: not in enabled drivers build config 00:57:17.070 net/tap: not in enabled drivers build config 00:57:17.070 net/thunderx: not in enabled drivers build config 00:57:17.070 net/txgbe: not in enabled drivers build config 00:57:17.070 net/vdev_netvsc: not in enabled drivers build config 00:57:17.070 net/vhost: not in enabled drivers build config 00:57:17.070 net/virtio: not in enabled drivers build config 00:57:17.070 net/vmxnet3: not in enabled drivers build config 00:57:17.070 raw/*: missing internal dependency, "rawdev" 00:57:17.070 crypto/armv8: not in enabled drivers build config 00:57:17.070 crypto/bcmfs: not in enabled drivers build config 00:57:17.070 crypto/caam_jr: not in enabled drivers build config 00:57:17.070 crypto/ccp: not in enabled drivers build config 00:57:17.070 crypto/cnxk: not in enabled drivers build config 00:57:17.070 crypto/dpaa_sec: not in enabled drivers build config 00:57:17.070 crypto/dpaa2_sec: not in enabled drivers build config 00:57:17.070 crypto/ipsec_mb: not in enabled drivers build config 00:57:17.070 crypto/mlx5: not in enabled drivers build config 00:57:17.070 crypto/mvsam: not in enabled drivers build config 00:57:17.070 crypto/nitrox: not in enabled drivers build config 00:57:17.070 crypto/null: not in enabled drivers build config 00:57:17.070 crypto/octeontx: not in enabled drivers build config 00:57:17.070 crypto/openssl: not in enabled drivers build config 00:57:17.070 crypto/scheduler: not in enabled drivers build config 00:57:17.070 crypto/uadk: not in enabled drivers build config 00:57:17.070 crypto/virtio: not in enabled drivers build config 00:57:17.070 compress/isal: not in enabled drivers build config 00:57:17.070 compress/mlx5: not in enabled drivers build config 00:57:17.070 compress/nitrox: not in enabled drivers build config 00:57:17.070 compress/octeontx: not in enabled drivers build config 00:57:17.070 compress/zlib: not in enabled drivers build config 00:57:17.070 regex/*: missing internal dependency, "regexdev" 00:57:17.070 ml/*: missing internal dependency, "mldev" 00:57:17.070 vdpa/ifc: not in enabled drivers build config 00:57:17.070 vdpa/mlx5: not in enabled drivers build config 00:57:17.070 vdpa/nfp: not in enabled drivers build config 00:57:17.070 vdpa/sfc: not in enabled drivers build config 00:57:17.070 event/*: missing internal dependency, "eventdev" 00:57:17.070 baseband/*: missing internal dependency, "bbdev" 00:57:17.070 gpu/*: missing internal dependency, "gpudev" 00:57:17.070 00:57:17.070 00:57:17.070 Build targets in project: 85 00:57:17.070 00:57:17.070 DPDK 24.03.0 00:57:17.070 00:57:17.070 User defined options 00:57:17.070 buildtype : debug 00:57:17.070 default_library : shared 00:57:17.070 libdir : lib 00:57:17.070 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:57:17.070 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:57:17.070 c_link_args : 
00:57:17.070 cpu_instruction_set: native 00:57:17.070 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:57:17.070 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:57:17.070 enable_docs : false 00:57:17.070 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:57:17.070 enable_kmods : false 00:57:17.070 max_lcores : 128 00:57:17.070 tests : false 00:57:17.070 00:57:17.070 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:57:17.639 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:57:17.639 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:57:17.639 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:57:17.639 [3/268] Linking static target lib/librte_kvargs.a 00:57:17.639 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:57:17.639 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:57:17.639 [6/268] Linking static target lib/librte_log.a 00:57:17.898 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:57:17.898 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:57:17.898 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:57:17.898 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:57:17.898 [11/268] Linking static target lib/librte_telemetry.a 00:57:18.157 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:57:18.157 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:57:18.157 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:57:18.157 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:57:18.157 [16/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:57:18.157 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:57:18.157 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:57:18.417 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:57:18.676 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:57:18.676 [21/268] Linking target lib/librte_log.so.24.1 00:57:18.676 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:57:18.676 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:57:18.676 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:57:18.676 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:57:18.676 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:57:18.676 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:57:18.676 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:57:18.676 [29/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:57:18.676 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:57:18.936 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:57:18.936 [32/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:57:18.936 [33/268] Linking target lib/librte_kvargs.so.24.1 00:57:19.195 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:57:19.195 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:57:19.195 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:57:19.195 [37/268] Linking target lib/librte_telemetry.so.24.1 00:57:19.195 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:57:19.195 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:57:19.195 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:57:19.195 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:57:19.195 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:57:19.195 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:57:19.195 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:57:19.195 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:57:19.195 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:57:19.455 [47/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:57:19.455 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:57:19.455 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:57:19.455 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:57:19.714 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:57:19.714 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:57:19.714 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:57:19.714 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:57:19.714 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:57:19.973 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:57:19.973 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:57:19.973 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:57:19.973 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:57:19.973 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:57:19.973 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:57:20.233 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:57:20.233 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:57:20.233 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:57:20.233 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:57:20.233 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:57:20.233 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:57:20.493 [68/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:57:20.493 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:57:20.493 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:57:20.493 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:57:20.752 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:57:20.752 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:57:20.752 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:57:20.752 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:57:20.752 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:57:20.752 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:57:20.752 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:57:21.011 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:57:21.011 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:57:21.011 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:57:21.011 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:57:21.270 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:57:21.270 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:57:21.270 [85/268] Linking static target lib/librte_eal.a 00:57:21.270 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:57:21.270 [87/268] Linking static target lib/librte_ring.a 00:57:21.270 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:57:21.529 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:57:21.529 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:57:21.529 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:57:21.529 [92/268] Linking static target lib/librte_mempool.a 00:57:21.529 [93/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:57:21.529 [94/268] Linking static target lib/librte_rcu.a 00:57:21.529 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:57:21.788 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:57:21.788 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:57:21.788 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:57:22.046 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:57:22.046 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:57:22.046 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:57:22.046 [102/268] Linking static target lib/librte_mbuf.a 00:57:22.046 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:57:22.046 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:57:22.046 [105/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:57:22.046 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:57:22.046 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:57:22.304 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:57:22.305 [109/268] Linking static target lib/librte_meter.a 00:57:22.563 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:57:22.563 [111/268] 
Linking static target lib/librte_net.a 00:57:22.563 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:57:22.563 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:57:22.563 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:57:22.563 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:57:22.563 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:57:22.823 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:57:22.823 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:57:23.081 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:57:23.081 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:57:23.081 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:57:23.081 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:57:23.339 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:57:23.339 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:57:23.339 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:57:23.339 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:57:23.339 [127/268] Linking static target lib/librte_pci.a 00:57:23.597 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:57:23.597 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:57:23.597 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:57:23.597 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:57:23.597 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:57:23.597 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:57:23.597 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:57:23.597 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:57:23.597 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:57:23.855 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:57:23.855 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:57:23.855 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:57:23.855 [140/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:57:23.855 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:57:23.855 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:57:23.855 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:57:23.855 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:57:23.855 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:57:23.855 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:57:23.855 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:57:23.855 [148/268] Linking static target lib/librte_cmdline.a 00:57:24.115 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:57:24.115 [150/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:57:24.115 [151/268] Linking static target lib/librte_ethdev.a 00:57:24.115 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:57:24.115 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:57:24.374 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:57:24.374 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:57:24.633 [156/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:57:24.633 [157/268] Linking static target lib/librte_timer.a 00:57:24.633 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:57:24.633 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:57:24.633 [160/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:57:24.633 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:57:24.633 [162/268] Linking static target lib/librte_hash.a 00:57:24.633 [163/268] Linking static target lib/librte_compressdev.a 00:57:24.633 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:57:24.633 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:57:24.892 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:57:24.892 [167/268] Linking static target lib/librte_dmadev.a 00:57:25.152 [168/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:57:25.152 [169/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:57:25.152 [170/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:57:25.152 [171/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:57:25.152 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:57:25.152 [173/268] Linking static target lib/librte_cryptodev.a 00:57:25.152 [174/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:57:25.411 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:57:25.411 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:57:25.411 [177/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:57:25.411 [178/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:57:25.669 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:57:25.669 [180/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:57:25.669 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:57:25.927 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:57:25.927 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:57:25.927 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:57:25.927 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:57:25.927 [186/268] Linking static target lib/librte_power.a 00:57:25.927 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:57:25.927 [188/268] Linking static target lib/librte_reorder.a 00:57:26.186 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:57:26.186 [190/268] Linking static target 
lib/librte_security.a 00:57:26.186 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:57:26.186 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:57:26.186 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:57:26.444 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:57:26.444 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:57:26.703 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:57:26.963 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:57:26.963 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:57:26.963 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:57:26.963 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:57:27.223 [201/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:57:27.223 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:57:27.223 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:57:27.482 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:57:27.482 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:57:27.482 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:57:27.482 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:57:27.482 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:57:27.482 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:57:27.482 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:57:27.741 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:57:27.741 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:57:27.741 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:57:27.741 [214/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:57:27.741 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:57:27.741 [216/268] Linking static target drivers/librte_bus_vdev.a 00:57:27.741 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:57:27.741 [218/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:57:27.741 [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:57:27.741 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:57:27.741 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:57:27.741 [222/268] Linking static target drivers/librte_bus_pci.a 00:57:28.000 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:57:28.000 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:57:28.000 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:57:28.000 [226/268] Linking static target drivers/librte_mempool_ring.a 00:57:28.000 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:57:28.259 [228/268] 
Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:57:28.826 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:57:28.826 [230/268] Linking static target lib/librte_vhost.a 00:57:31.357 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:57:33.261 [232/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:57:33.521 [233/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:57:33.521 [234/268] Linking target lib/librte_eal.so.24.1 00:57:33.521 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:57:33.521 [236/268] Linking target lib/librte_pci.so.24.1 00:57:33.521 [237/268] Linking target lib/librte_ring.so.24.1 00:57:33.521 [238/268] Linking target lib/librte_meter.so.24.1 00:57:33.521 [239/268] Linking target lib/librte_timer.so.24.1 00:57:33.521 [240/268] Linking target lib/librte_dmadev.so.24.1 00:57:33.521 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:57:33.780 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:57:33.780 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:57:33.780 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:57:33.780 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:57:33.780 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:57:33.780 [247/268] Linking target lib/librte_rcu.so.24.1 00:57:33.780 [248/268] Linking target lib/librte_mempool.so.24.1 00:57:33.780 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:57:33.780 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:57:33.780 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:57:34.040 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:57:34.040 [253/268] Linking target lib/librte_mbuf.so.24.1 00:57:34.040 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:57:34.040 [255/268] Linking target lib/librte_reorder.so.24.1 00:57:34.040 [256/268] Linking target lib/librte_net.so.24.1 00:57:34.040 [257/268] Linking target lib/librte_compressdev.so.24.1 00:57:34.040 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:57:34.299 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:57:34.299 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:57:34.299 [261/268] Linking target lib/librte_cmdline.so.24.1 00:57:34.299 [262/268] Linking target lib/librte_hash.so.24.1 00:57:34.299 [263/268] Linking target lib/librte_security.so.24.1 00:57:34.299 [264/268] Linking target lib/librte_ethdev.so.24.1 00:57:34.558 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:57:34.558 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:57:34.558 [267/268] Linking target lib/librte_power.so.24.1 00:57:34.558 [268/268] Linking target lib/librte_vhost.so.24.1 00:57:34.558 INFO: autodetecting backend as ninja 00:57:34.558 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:57:52.647 CC lib/ut/ut.o 00:57:52.647 CC lib/log/log.o 
00:57:52.647 CC lib/log/log_flags.o 00:57:52.647 CC lib/log/log_deprecated.o 00:57:52.647 CC lib/ut_mock/mock.o 00:57:52.647 LIB libspdk_ut.a 00:57:52.647 LIB libspdk_log.a 00:57:52.647 SO libspdk_ut.so.2.0 00:57:52.647 LIB libspdk_ut_mock.a 00:57:52.647 SO libspdk_log.so.7.1 00:57:52.647 SO libspdk_ut_mock.so.6.0 00:57:52.647 SYMLINK libspdk_ut.so 00:57:52.647 SYMLINK libspdk_log.so 00:57:52.647 SYMLINK libspdk_ut_mock.so 00:57:52.647 CC lib/ioat/ioat.o 00:57:52.647 CC lib/util/base64.o 00:57:52.647 CC lib/util/cpuset.o 00:57:52.647 CC lib/util/crc16.o 00:57:52.647 CC lib/util/bit_array.o 00:57:52.647 CC lib/util/crc32.o 00:57:52.647 CC lib/util/crc32c.o 00:57:52.647 CXX lib/trace_parser/trace.o 00:57:52.647 CC lib/dma/dma.o 00:57:52.647 CC lib/vfio_user/host/vfio_user_pci.o 00:57:52.647 CC lib/util/crc32_ieee.o 00:57:52.647 CC lib/vfio_user/host/vfio_user.o 00:57:52.647 CC lib/util/crc64.o 00:57:52.647 CC lib/util/dif.o 00:57:52.647 CC lib/util/fd.o 00:57:52.647 CC lib/util/fd_group.o 00:57:52.647 LIB libspdk_dma.a 00:57:52.647 SO libspdk_dma.so.5.0 00:57:52.647 CC lib/util/file.o 00:57:52.647 LIB libspdk_ioat.a 00:57:52.647 CC lib/util/hexlify.o 00:57:52.647 SO libspdk_ioat.so.7.0 00:57:52.647 SYMLINK libspdk_dma.so 00:57:52.647 CC lib/util/iov.o 00:57:52.647 CC lib/util/math.o 00:57:52.647 CC lib/util/net.o 00:57:52.647 LIB libspdk_vfio_user.a 00:57:52.647 SYMLINK libspdk_ioat.so 00:57:52.647 CC lib/util/pipe.o 00:57:52.647 SO libspdk_vfio_user.so.5.0 00:57:52.647 SYMLINK libspdk_vfio_user.so 00:57:52.647 CC lib/util/strerror_tls.o 00:57:52.647 CC lib/util/string.o 00:57:52.647 CC lib/util/uuid.o 00:57:52.647 CC lib/util/xor.o 00:57:52.647 CC lib/util/zipf.o 00:57:52.647 CC lib/util/md5.o 00:57:52.647 LIB libspdk_util.a 00:57:52.647 SO libspdk_util.so.10.1 00:57:52.648 LIB libspdk_trace_parser.a 00:57:52.648 SO libspdk_trace_parser.so.6.0 00:57:52.648 SYMLINK libspdk_util.so 00:57:52.648 SYMLINK libspdk_trace_parser.so 00:57:52.648 CC lib/conf/conf.o 00:57:52.648 CC lib/json/json_parse.o 00:57:52.648 CC lib/json/json_util.o 00:57:52.648 CC lib/json/json_write.o 00:57:52.648 CC lib/vmd/vmd.o 00:57:52.648 CC lib/vmd/led.o 00:57:52.648 CC lib/idxd/idxd.o 00:57:52.648 CC lib/idxd/idxd_user.o 00:57:52.648 CC lib/rdma_utils/rdma_utils.o 00:57:52.648 CC lib/env_dpdk/env.o 00:57:52.648 CC lib/env_dpdk/memory.o 00:57:52.648 LIB libspdk_conf.a 00:57:52.648 CC lib/env_dpdk/pci.o 00:57:52.648 CC lib/env_dpdk/init.o 00:57:52.648 SO libspdk_conf.so.6.0 00:57:52.648 CC lib/idxd/idxd_kernel.o 00:57:52.648 LIB libspdk_json.a 00:57:52.648 LIB libspdk_rdma_utils.a 00:57:52.648 SYMLINK libspdk_conf.so 00:57:52.648 CC lib/env_dpdk/threads.o 00:57:52.648 SO libspdk_json.so.6.0 00:57:52.648 SO libspdk_rdma_utils.so.1.0 00:57:52.648 SYMLINK libspdk_json.so 00:57:52.648 SYMLINK libspdk_rdma_utils.so 00:57:52.648 CC lib/env_dpdk/pci_ioat.o 00:57:52.648 CC lib/env_dpdk/pci_virtio.o 00:57:52.648 CC lib/env_dpdk/pci_vmd.o 00:57:52.648 CC lib/env_dpdk/pci_idxd.o 00:57:52.648 LIB libspdk_idxd.a 00:57:52.648 CC lib/jsonrpc/jsonrpc_server.o 00:57:52.648 SO libspdk_idxd.so.12.1 00:57:52.648 CC lib/env_dpdk/pci_event.o 00:57:52.648 CC lib/env_dpdk/sigbus_handler.o 00:57:52.648 LIB libspdk_vmd.a 00:57:52.648 SO libspdk_vmd.so.6.0 00:57:52.648 SYMLINK libspdk_idxd.so 00:57:52.648 CC lib/rdma_provider/common.o 00:57:52.648 CC lib/rdma_provider/rdma_provider_verbs.o 00:57:52.648 CC lib/env_dpdk/pci_dpdk.o 00:57:52.648 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:57:52.648 SYMLINK libspdk_vmd.so 00:57:52.648 CC 
lib/jsonrpc/jsonrpc_client.o 00:57:52.648 CC lib/env_dpdk/pci_dpdk_2207.o 00:57:52.648 CC lib/env_dpdk/pci_dpdk_2211.o 00:57:52.648 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:57:52.648 LIB libspdk_rdma_provider.a 00:57:52.648 SO libspdk_rdma_provider.so.7.0 00:57:52.648 LIB libspdk_jsonrpc.a 00:57:52.648 SYMLINK libspdk_rdma_provider.so 00:57:52.648 SO libspdk_jsonrpc.so.6.0 00:57:52.648 SYMLINK libspdk_jsonrpc.so 00:57:52.648 LIB libspdk_env_dpdk.a 00:57:52.648 SO libspdk_env_dpdk.so.15.1 00:57:52.648 SYMLINK libspdk_env_dpdk.so 00:57:52.648 CC lib/rpc/rpc.o 00:57:52.907 LIB libspdk_rpc.a 00:57:52.907 SO libspdk_rpc.so.6.0 00:57:53.167 SYMLINK libspdk_rpc.so 00:57:53.426 CC lib/notify/notify.o 00:57:53.426 CC lib/notify/notify_rpc.o 00:57:53.426 CC lib/trace/trace_flags.o 00:57:53.426 CC lib/trace/trace.o 00:57:53.426 CC lib/trace/trace_rpc.o 00:57:53.426 CC lib/keyring/keyring.o 00:57:53.426 CC lib/keyring/keyring_rpc.o 00:57:53.685 LIB libspdk_notify.a 00:57:53.685 SO libspdk_notify.so.6.0 00:57:53.685 LIB libspdk_trace.a 00:57:53.685 LIB libspdk_keyring.a 00:57:53.685 SO libspdk_trace.so.11.0 00:57:53.685 SYMLINK libspdk_notify.so 00:57:53.685 SO libspdk_keyring.so.2.0 00:57:53.944 SYMLINK libspdk_trace.so 00:57:53.944 SYMLINK libspdk_keyring.so 00:57:54.203 CC lib/thread/thread.o 00:57:54.203 CC lib/thread/iobuf.o 00:57:54.203 CC lib/sock/sock.o 00:57:54.203 CC lib/sock/sock_rpc.o 00:57:54.462 LIB libspdk_sock.a 00:57:54.722 SO libspdk_sock.so.10.0 00:57:54.722 SYMLINK libspdk_sock.so 00:57:54.982 CC lib/nvme/nvme_ctrlr_cmd.o 00:57:54.982 CC lib/nvme/nvme_ctrlr.o 00:57:54.982 CC lib/nvme/nvme_fabric.o 00:57:54.982 CC lib/nvme/nvme_ns_cmd.o 00:57:54.982 CC lib/nvme/nvme_ns.o 00:57:54.982 CC lib/nvme/nvme_pcie_common.o 00:57:54.982 CC lib/nvme/nvme_pcie.o 00:57:54.982 CC lib/nvme/nvme_qpair.o 00:57:54.982 CC lib/nvme/nvme.o 00:57:55.552 LIB libspdk_thread.a 00:57:55.552 SO libspdk_thread.so.11.0 00:57:55.552 SYMLINK libspdk_thread.so 00:57:55.552 CC lib/nvme/nvme_quirks.o 00:57:55.811 CC lib/nvme/nvme_transport.o 00:57:55.811 CC lib/nvme/nvme_discovery.o 00:57:55.811 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:57:55.811 CC lib/accel/accel.o 00:57:55.811 CC lib/blob/blobstore.o 00:57:55.811 CC lib/accel/accel_rpc.o 00:57:55.811 CC lib/blob/request.o 00:57:56.071 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:57:56.071 CC lib/blob/zeroes.o 00:57:56.071 CC lib/init/json_config.o 00:57:56.071 CC lib/init/subsystem.o 00:57:56.071 CC lib/init/subsystem_rpc.o 00:57:56.331 CC lib/init/rpc.o 00:57:56.331 CC lib/nvme/nvme_tcp.o 00:57:56.331 CC lib/nvme/nvme_opal.o 00:57:56.331 CC lib/accel/accel_sw.o 00:57:56.331 LIB libspdk_init.a 00:57:56.331 CC lib/virtio/virtio.o 00:57:56.331 SO libspdk_init.so.6.0 00:57:56.331 CC lib/fsdev/fsdev.o 00:57:56.590 CC lib/fsdev/fsdev_io.o 00:57:56.590 SYMLINK libspdk_init.so 00:57:56.590 CC lib/fsdev/fsdev_rpc.o 00:57:56.590 CC lib/nvme/nvme_io_msg.o 00:57:56.590 LIB libspdk_accel.a 00:57:56.590 CC lib/virtio/virtio_vhost_user.o 00:57:56.849 SO libspdk_accel.so.16.0 00:57:56.849 CC lib/nvme/nvme_poll_group.o 00:57:56.849 CC lib/blob/blob_bs_dev.o 00:57:56.849 SYMLINK libspdk_accel.so 00:57:56.849 CC lib/nvme/nvme_zns.o 00:57:56.849 CC lib/nvme/nvme_stubs.o 00:57:56.849 CC lib/event/app.o 00:57:56.849 CC lib/virtio/virtio_vfio_user.o 00:57:57.108 LIB libspdk_fsdev.a 00:57:57.108 CC lib/virtio/virtio_pci.o 00:57:57.108 SO libspdk_fsdev.so.2.0 00:57:57.108 SYMLINK libspdk_fsdev.so 00:57:57.108 CC lib/nvme/nvme_auth.o 00:57:57.366 CC lib/event/reactor.o 00:57:57.366 LIB libspdk_virtio.a 
00:57:57.366 CC lib/bdev/bdev.o 00:57:57.366 SO libspdk_virtio.so.7.0 00:57:57.366 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:57:57.366 CC lib/nvme/nvme_cuse.o 00:57:57.366 CC lib/nvme/nvme_rdma.o 00:57:57.366 CC lib/event/log_rpc.o 00:57:57.366 SYMLINK libspdk_virtio.so 00:57:57.366 CC lib/bdev/bdev_rpc.o 00:57:57.627 CC lib/bdev/bdev_zone.o 00:57:57.627 CC lib/bdev/part.o 00:57:57.627 CC lib/event/app_rpc.o 00:57:57.627 CC lib/bdev/scsi_nvme.o 00:57:57.627 CC lib/event/scheduler_static.o 00:57:57.913 LIB libspdk_event.a 00:57:57.913 LIB libspdk_fuse_dispatcher.a 00:57:57.913 SO libspdk_event.so.14.0 00:57:57.913 SO libspdk_fuse_dispatcher.so.1.0 00:57:57.913 SYMLINK libspdk_event.so 00:57:57.913 SYMLINK libspdk_fuse_dispatcher.so 00:57:58.480 LIB libspdk_blob.a 00:57:58.480 SO libspdk_blob.so.12.0 00:57:58.480 SYMLINK libspdk_blob.so 00:57:58.480 LIB libspdk_nvme.a 00:57:58.738 SO libspdk_nvme.so.15.0 00:57:58.738 CC lib/lvol/lvol.o 00:57:58.997 CC lib/blobfs/blobfs.o 00:57:58.997 CC lib/blobfs/tree.o 00:57:58.997 SYMLINK libspdk_nvme.so 00:57:59.566 LIB libspdk_bdev.a 00:57:59.566 LIB libspdk_blobfs.a 00:57:59.566 SO libspdk_bdev.so.17.0 00:57:59.566 LIB libspdk_lvol.a 00:57:59.566 SO libspdk_blobfs.so.11.0 00:57:59.566 SO libspdk_lvol.so.11.0 00:57:59.566 SYMLINK libspdk_bdev.so 00:57:59.824 SYMLINK libspdk_blobfs.so 00:57:59.824 SYMLINK libspdk_lvol.so 00:57:59.824 CC lib/ftl/ftl_init.o 00:57:59.824 CC lib/ftl/ftl_layout.o 00:57:59.824 CC lib/ftl/ftl_core.o 00:57:59.824 CC lib/nbd/nbd.o 00:57:59.824 CC lib/ftl/ftl_io.o 00:57:59.824 CC lib/ftl/ftl_debug.o 00:57:59.824 CC lib/nvmf/ctrlr.o 00:57:59.824 CC lib/ftl/ftl_sb.o 00:58:00.083 CC lib/scsi/dev.o 00:58:00.083 CC lib/ublk/ublk.o 00:58:00.083 CC lib/nbd/nbd_rpc.o 00:58:00.083 CC lib/scsi/lun.o 00:58:00.083 CC lib/ftl/ftl_l2p.o 00:58:00.083 CC lib/ftl/ftl_l2p_flat.o 00:58:00.083 CC lib/ftl/ftl_nv_cache.o 00:58:00.342 CC lib/ftl/ftl_band.o 00:58:00.342 CC lib/ftl/ftl_band_ops.o 00:58:00.342 CC lib/ftl/ftl_writer.o 00:58:00.342 LIB libspdk_nbd.a 00:58:00.342 SO libspdk_nbd.so.7.0 00:58:00.342 CC lib/ftl/ftl_rq.o 00:58:00.342 CC lib/ftl/ftl_reloc.o 00:58:00.342 SYMLINK libspdk_nbd.so 00:58:00.342 CC lib/nvmf/ctrlr_discovery.o 00:58:00.342 CC lib/scsi/port.o 00:58:00.342 CC lib/ublk/ublk_rpc.o 00:58:00.600 CC lib/nvmf/ctrlr_bdev.o 00:58:00.600 CC lib/nvmf/subsystem.o 00:58:00.600 CC lib/scsi/scsi.o 00:58:00.600 CC lib/scsi/scsi_bdev.o 00:58:00.600 CC lib/scsi/scsi_pr.o 00:58:00.600 LIB libspdk_ublk.a 00:58:00.600 CC lib/nvmf/nvmf.o 00:58:00.600 SO libspdk_ublk.so.3.0 00:58:00.600 CC lib/nvmf/nvmf_rpc.o 00:58:00.600 SYMLINK libspdk_ublk.so 00:58:00.858 CC lib/nvmf/transport.o 00:58:00.858 CC lib/nvmf/tcp.o 00:58:00.858 CC lib/nvmf/stubs.o 00:58:00.858 CC lib/scsi/scsi_rpc.o 00:58:00.858 CC lib/ftl/ftl_l2p_cache.o 00:58:01.116 CC lib/ftl/ftl_p2l.o 00:58:01.116 CC lib/scsi/task.o 00:58:01.116 CC lib/ftl/ftl_p2l_log.o 00:58:01.375 LIB libspdk_scsi.a 00:58:01.375 CC lib/nvmf/mdns_server.o 00:58:01.375 SO libspdk_scsi.so.9.0 00:58:01.375 CC lib/nvmf/rdma.o 00:58:01.375 CC lib/nvmf/auth.o 00:58:01.375 CC lib/ftl/mngt/ftl_mngt.o 00:58:01.375 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:58:01.375 SYMLINK libspdk_scsi.so 00:58:01.375 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:58:01.375 CC lib/ftl/mngt/ftl_mngt_startup.o 00:58:01.633 CC lib/ftl/mngt/ftl_mngt_md.o 00:58:01.633 CC lib/ftl/mngt/ftl_mngt_misc.o 00:58:01.633 CC lib/iscsi/conn.o 00:58:01.633 CC lib/iscsi/init_grp.o 00:58:01.633 CC lib/iscsi/iscsi.o 00:58:01.633 CC lib/ftl/mngt/ftl_mngt_ioch.o 
00:58:01.633 CC lib/vhost/vhost.o 00:58:01.892 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:58:01.892 CC lib/ftl/mngt/ftl_mngt_band.o 00:58:01.892 CC lib/vhost/vhost_rpc.o 00:58:01.892 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:58:02.155 CC lib/iscsi/param.o 00:58:02.155 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:58:02.155 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:58:02.155 CC lib/vhost/vhost_scsi.o 00:58:02.155 CC lib/iscsi/portal_grp.o 00:58:02.155 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:58:02.155 CC lib/vhost/vhost_blk.o 00:58:02.414 CC lib/iscsi/tgt_node.o 00:58:02.414 CC lib/iscsi/iscsi_subsystem.o 00:58:02.415 CC lib/iscsi/iscsi_rpc.o 00:58:02.415 CC lib/iscsi/task.o 00:58:02.415 CC lib/ftl/utils/ftl_conf.o 00:58:02.415 CC lib/vhost/rte_vhost_user.o 00:58:02.674 CC lib/ftl/utils/ftl_md.o 00:58:02.674 CC lib/ftl/utils/ftl_mempool.o 00:58:02.674 CC lib/ftl/utils/ftl_bitmap.o 00:58:02.674 CC lib/ftl/utils/ftl_property.o 00:58:02.674 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:58:02.674 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:58:02.674 LIB libspdk_iscsi.a 00:58:02.674 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:58:02.934 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:58:02.934 SO libspdk_iscsi.so.8.0 00:58:02.934 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:58:02.934 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:58:02.934 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:58:02.934 CC lib/ftl/upgrade/ftl_sb_v3.o 00:58:02.934 CC lib/ftl/upgrade/ftl_sb_v5.o 00:58:02.934 SYMLINK libspdk_iscsi.so 00:58:02.934 CC lib/ftl/nvc/ftl_nvc_dev.o 00:58:02.934 LIB libspdk_nvmf.a 00:58:02.934 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:58:02.934 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:58:03.193 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:58:03.193 SO libspdk_nvmf.so.20.0 00:58:03.193 CC lib/ftl/base/ftl_base_dev.o 00:58:03.193 CC lib/ftl/base/ftl_base_bdev.o 00:58:03.194 CC lib/ftl/ftl_trace.o 00:58:03.194 SYMLINK libspdk_nvmf.so 00:58:03.453 LIB libspdk_ftl.a 00:58:03.453 LIB libspdk_vhost.a 00:58:03.453 SO libspdk_vhost.so.8.0 00:58:03.453 SYMLINK libspdk_vhost.so 00:58:03.712 SO libspdk_ftl.so.9.0 00:58:03.972 SYMLINK libspdk_ftl.so 00:58:04.232 CC module/env_dpdk/env_dpdk_rpc.o 00:58:04.491 CC module/scheduler/dynamic/scheduler_dynamic.o 00:58:04.491 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:58:04.491 CC module/fsdev/aio/fsdev_aio.o 00:58:04.491 CC module/scheduler/gscheduler/gscheduler.o 00:58:04.491 CC module/sock/posix/posix.o 00:58:04.491 CC module/keyring/linux/keyring.o 00:58:04.491 CC module/blob/bdev/blob_bdev.o 00:58:04.491 CC module/keyring/file/keyring.o 00:58:04.491 CC module/accel/error/accel_error.o 00:58:04.491 LIB libspdk_env_dpdk_rpc.a 00:58:04.491 SO libspdk_env_dpdk_rpc.so.6.0 00:58:04.491 SYMLINK libspdk_env_dpdk_rpc.so 00:58:04.491 CC module/fsdev/aio/fsdev_aio_rpc.o 00:58:04.491 LIB libspdk_scheduler_gscheduler.a 00:58:04.491 CC module/keyring/file/keyring_rpc.o 00:58:04.491 LIB libspdk_scheduler_dpdk_governor.a 00:58:04.491 CC module/keyring/linux/keyring_rpc.o 00:58:04.491 SO libspdk_scheduler_gscheduler.so.4.0 00:58:04.491 SO libspdk_scheduler_dpdk_governor.so.4.0 00:58:04.491 LIB libspdk_scheduler_dynamic.a 00:58:04.491 CC module/accel/error/accel_error_rpc.o 00:58:04.491 SYMLINK libspdk_scheduler_gscheduler.so 00:58:04.491 SO libspdk_scheduler_dynamic.so.4.0 00:58:04.491 CC module/fsdev/aio/linux_aio_mgr.o 00:58:04.491 SYMLINK libspdk_scheduler_dpdk_governor.so 00:58:04.751 LIB libspdk_blob_bdev.a 00:58:04.751 LIB libspdk_keyring_linux.a 00:58:04.751 LIB libspdk_keyring_file.a 00:58:04.751 SO libspdk_blob_bdev.so.12.0 
00:58:04.751 SYMLINK libspdk_scheduler_dynamic.so 00:58:04.751 SO libspdk_keyring_linux.so.1.0 00:58:04.751 SO libspdk_keyring_file.so.2.0 00:58:04.751 SYMLINK libspdk_blob_bdev.so 00:58:04.751 LIB libspdk_accel_error.a 00:58:04.751 SYMLINK libspdk_keyring_file.so 00:58:04.751 SYMLINK libspdk_keyring_linux.so 00:58:04.751 SO libspdk_accel_error.so.2.0 00:58:04.751 CC module/sock/uring/uring.o 00:58:04.751 SYMLINK libspdk_accel_error.so 00:58:04.751 CC module/accel/ioat/accel_ioat.o 00:58:04.751 CC module/accel/ioat/accel_ioat_rpc.o 00:58:04.751 CC module/accel/dsa/accel_dsa.o 00:58:05.010 CC module/accel/iaa/accel_iaa.o 00:58:05.010 LIB libspdk_fsdev_aio.a 00:58:05.010 CC module/accel/dsa/accel_dsa_rpc.o 00:58:05.010 SO libspdk_fsdev_aio.so.1.0 00:58:05.010 CC module/blobfs/bdev/blobfs_bdev.o 00:58:05.010 CC module/bdev/delay/vbdev_delay.o 00:58:05.010 LIB libspdk_accel_ioat.a 00:58:05.010 CC module/bdev/error/vbdev_error.o 00:58:05.010 LIB libspdk_sock_posix.a 00:58:05.010 SO libspdk_accel_ioat.so.6.0 00:58:05.010 SYMLINK libspdk_fsdev_aio.so 00:58:05.010 CC module/bdev/error/vbdev_error_rpc.o 00:58:05.010 SO libspdk_sock_posix.so.6.0 00:58:05.010 LIB libspdk_accel_dsa.a 00:58:05.010 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:58:05.010 SYMLINK libspdk_accel_ioat.so 00:58:05.010 CC module/accel/iaa/accel_iaa_rpc.o 00:58:05.010 SO libspdk_accel_dsa.so.5.0 00:58:05.010 SYMLINK libspdk_sock_posix.so 00:58:05.270 SYMLINK libspdk_accel_dsa.so 00:58:05.270 CC module/bdev/delay/vbdev_delay_rpc.o 00:58:05.270 LIB libspdk_bdev_error.a 00:58:05.270 LIB libspdk_accel_iaa.a 00:58:05.270 SO libspdk_bdev_error.so.6.0 00:58:05.270 LIB libspdk_blobfs_bdev.a 00:58:05.270 CC module/bdev/gpt/gpt.o 00:58:05.270 SO libspdk_accel_iaa.so.3.0 00:58:05.270 SO libspdk_blobfs_bdev.so.6.0 00:58:05.270 CC module/bdev/malloc/bdev_malloc.o 00:58:05.270 CC module/bdev/lvol/vbdev_lvol.o 00:58:05.270 CC module/bdev/malloc/bdev_malloc_rpc.o 00:58:05.270 SYMLINK libspdk_bdev_error.so 00:58:05.270 SYMLINK libspdk_accel_iaa.so 00:58:05.270 SYMLINK libspdk_blobfs_bdev.so 00:58:05.270 CC module/bdev/null/bdev_null.o 00:58:05.270 LIB libspdk_bdev_delay.a 00:58:05.270 LIB libspdk_sock_uring.a 00:58:05.270 SO libspdk_bdev_delay.so.6.0 00:58:05.529 SO libspdk_sock_uring.so.5.0 00:58:05.529 CC module/bdev/gpt/vbdev_gpt.o 00:58:05.529 SYMLINK libspdk_bdev_delay.so 00:58:05.529 SYMLINK libspdk_sock_uring.so 00:58:05.529 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:58:05.529 CC module/bdev/nvme/bdev_nvme.o 00:58:05.529 CC module/bdev/passthru/vbdev_passthru.o 00:58:05.529 CC module/bdev/raid/bdev_raid.o 00:58:05.529 CC module/bdev/null/bdev_null_rpc.o 00:58:05.529 LIB libspdk_bdev_malloc.a 00:58:05.529 CC module/bdev/split/vbdev_split.o 00:58:05.529 CC module/bdev/zone_block/vbdev_zone_block.o 00:58:05.529 SO libspdk_bdev_malloc.so.6.0 00:58:05.789 LIB libspdk_bdev_gpt.a 00:58:05.789 SO libspdk_bdev_gpt.so.6.0 00:58:05.789 SYMLINK libspdk_bdev_malloc.so 00:58:05.789 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:58:05.789 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:58:05.789 LIB libspdk_bdev_null.a 00:58:05.789 SYMLINK libspdk_bdev_gpt.so 00:58:05.789 CC module/bdev/raid/bdev_raid_rpc.o 00:58:05.789 CC module/bdev/raid/bdev_raid_sb.o 00:58:05.789 LIB libspdk_bdev_lvol.a 00:58:05.789 SO libspdk_bdev_null.so.6.0 00:58:05.789 SO libspdk_bdev_lvol.so.6.0 00:58:05.789 CC module/bdev/split/vbdev_split_rpc.o 00:58:05.789 SYMLINK libspdk_bdev_null.so 00:58:05.789 CC module/bdev/raid/raid0.o 00:58:05.789 CC module/bdev/raid/raid1.o 
00:58:05.789 SYMLINK libspdk_bdev_lvol.so 00:58:05.789 LIB libspdk_bdev_passthru.a 00:58:06.048 SO libspdk_bdev_passthru.so.6.0 00:58:06.048 LIB libspdk_bdev_zone_block.a 00:58:06.048 CC module/bdev/nvme/bdev_nvme_rpc.o 00:58:06.048 LIB libspdk_bdev_split.a 00:58:06.048 SO libspdk_bdev_zone_block.so.6.0 00:58:06.048 SYMLINK libspdk_bdev_passthru.so 00:58:06.048 CC module/bdev/nvme/nvme_rpc.o 00:58:06.048 SO libspdk_bdev_split.so.6.0 00:58:06.048 CC module/bdev/nvme/bdev_mdns_client.o 00:58:06.048 SYMLINK libspdk_bdev_zone_block.so 00:58:06.048 CC module/bdev/nvme/vbdev_opal.o 00:58:06.048 CC module/bdev/uring/bdev_uring.o 00:58:06.048 SYMLINK libspdk_bdev_split.so 00:58:06.048 CC module/bdev/raid/concat.o 00:58:06.048 CC module/bdev/nvme/vbdev_opal_rpc.o 00:58:06.048 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:58:06.308 CC module/bdev/uring/bdev_uring_rpc.o 00:58:06.308 LIB libspdk_bdev_raid.a 00:58:06.308 LIB libspdk_bdev_uring.a 00:58:06.308 SO libspdk_bdev_uring.so.6.0 00:58:06.308 SO libspdk_bdev_raid.so.6.0 00:58:06.567 CC module/bdev/aio/bdev_aio.o 00:58:06.567 CC module/bdev/aio/bdev_aio_rpc.o 00:58:06.567 SYMLINK libspdk_bdev_uring.so 00:58:06.567 CC module/bdev/ftl/bdev_ftl_rpc.o 00:58:06.567 CC module/bdev/ftl/bdev_ftl.o 00:58:06.567 CC module/bdev/virtio/bdev_virtio_scsi.o 00:58:06.567 CC module/bdev/iscsi/bdev_iscsi.o 00:58:06.567 CC module/bdev/virtio/bdev_virtio_blk.o 00:58:06.567 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:58:06.567 SYMLINK libspdk_bdev_raid.so 00:58:06.567 CC module/bdev/virtio/bdev_virtio_rpc.o 00:58:06.827 LIB libspdk_bdev_aio.a 00:58:06.827 LIB libspdk_bdev_ftl.a 00:58:06.827 SO libspdk_bdev_aio.so.6.0 00:58:06.827 SO libspdk_bdev_ftl.so.6.0 00:58:06.827 LIB libspdk_bdev_iscsi.a 00:58:06.827 SYMLINK libspdk_bdev_aio.so 00:58:06.827 SO libspdk_bdev_iscsi.so.6.0 00:58:06.827 SYMLINK libspdk_bdev_ftl.so 00:58:06.827 SYMLINK libspdk_bdev_iscsi.so 00:58:06.827 LIB libspdk_bdev_virtio.a 00:58:07.087 SO libspdk_bdev_virtio.so.6.0 00:58:07.087 SYMLINK libspdk_bdev_virtio.so 00:58:07.656 LIB libspdk_bdev_nvme.a 00:58:07.656 SO libspdk_bdev_nvme.so.7.1 00:58:07.916 SYMLINK libspdk_bdev_nvme.so 00:58:08.486 CC module/event/subsystems/scheduler/scheduler.o 00:58:08.486 CC module/event/subsystems/iobuf/iobuf.o 00:58:08.486 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:58:08.486 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:58:08.486 CC module/event/subsystems/keyring/keyring.o 00:58:08.486 CC module/event/subsystems/sock/sock.o 00:58:08.486 CC module/event/subsystems/fsdev/fsdev.o 00:58:08.486 CC module/event/subsystems/vmd/vmd.o 00:58:08.486 CC module/event/subsystems/vmd/vmd_rpc.o 00:58:08.486 LIB libspdk_event_vhost_blk.a 00:58:08.486 LIB libspdk_event_scheduler.a 00:58:08.486 LIB libspdk_event_fsdev.a 00:58:08.486 LIB libspdk_event_sock.a 00:58:08.486 LIB libspdk_event_iobuf.a 00:58:08.486 LIB libspdk_event_keyring.a 00:58:08.486 SO libspdk_event_vhost_blk.so.3.0 00:58:08.486 SO libspdk_event_scheduler.so.4.0 00:58:08.486 SO libspdk_event_fsdev.so.1.0 00:58:08.486 SO libspdk_event_sock.so.5.0 00:58:08.486 SO libspdk_event_iobuf.so.3.0 00:58:08.486 LIB libspdk_event_vmd.a 00:58:08.486 SO libspdk_event_keyring.so.1.0 00:58:08.746 SYMLINK libspdk_event_vhost_blk.so 00:58:08.746 SYMLINK libspdk_event_scheduler.so 00:58:08.746 SYMLINK libspdk_event_sock.so 00:58:08.746 SO libspdk_event_vmd.so.6.0 00:58:08.746 SYMLINK libspdk_event_fsdev.so 00:58:08.746 SYMLINK libspdk_event_iobuf.so 00:58:08.746 SYMLINK libspdk_event_keyring.so 00:58:08.746 SYMLINK 
libspdk_event_vmd.so 00:58:09.018 CC module/event/subsystems/accel/accel.o 00:58:09.322 LIB libspdk_event_accel.a 00:58:09.322 SO libspdk_event_accel.so.6.0 00:58:09.322 SYMLINK libspdk_event_accel.so 00:58:09.891 CC module/event/subsystems/bdev/bdev.o 00:58:09.891 LIB libspdk_event_bdev.a 00:58:09.891 SO libspdk_event_bdev.so.6.0 00:58:10.150 SYMLINK libspdk_event_bdev.so 00:58:10.410 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:58:10.410 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:58:10.410 CC module/event/subsystems/scsi/scsi.o 00:58:10.410 CC module/event/subsystems/nbd/nbd.o 00:58:10.410 CC module/event/subsystems/ublk/ublk.o 00:58:10.680 LIB libspdk_event_scsi.a 00:58:10.680 LIB libspdk_event_nbd.a 00:58:10.680 LIB libspdk_event_ublk.a 00:58:10.680 SO libspdk_event_scsi.so.6.0 00:58:10.680 LIB libspdk_event_nvmf.a 00:58:10.680 SO libspdk_event_ublk.so.3.0 00:58:10.680 SO libspdk_event_nbd.so.6.0 00:58:10.680 SYMLINK libspdk_event_scsi.so 00:58:10.680 SO libspdk_event_nvmf.so.6.0 00:58:10.680 SYMLINK libspdk_event_ublk.so 00:58:10.680 SYMLINK libspdk_event_nbd.so 00:58:10.681 SYMLINK libspdk_event_nvmf.so 00:58:10.943 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:58:10.943 CC module/event/subsystems/iscsi/iscsi.o 00:58:11.202 LIB libspdk_event_iscsi.a 00:58:11.202 LIB libspdk_event_vhost_scsi.a 00:58:11.202 SO libspdk_event_iscsi.so.6.0 00:58:11.202 SO libspdk_event_vhost_scsi.so.3.0 00:58:11.202 SYMLINK libspdk_event_iscsi.so 00:58:11.202 SYMLINK libspdk_event_vhost_scsi.so 00:58:11.461 SO libspdk.so.6.0 00:58:11.461 SYMLINK libspdk.so 00:58:12.028 TEST_HEADER include/spdk/accel.h 00:58:12.028 CC app/trace_record/trace_record.o 00:58:12.028 TEST_HEADER include/spdk/accel_module.h 00:58:12.028 CC test/rpc_client/rpc_client_test.o 00:58:12.028 TEST_HEADER include/spdk/assert.h 00:58:12.028 TEST_HEADER include/spdk/barrier.h 00:58:12.028 CXX app/trace/trace.o 00:58:12.028 TEST_HEADER include/spdk/base64.h 00:58:12.028 TEST_HEADER include/spdk/bdev.h 00:58:12.028 TEST_HEADER include/spdk/bdev_module.h 00:58:12.028 TEST_HEADER include/spdk/bdev_zone.h 00:58:12.028 TEST_HEADER include/spdk/bit_array.h 00:58:12.028 TEST_HEADER include/spdk/bit_pool.h 00:58:12.028 TEST_HEADER include/spdk/blob_bdev.h 00:58:12.028 TEST_HEADER include/spdk/blobfs_bdev.h 00:58:12.028 TEST_HEADER include/spdk/blobfs.h 00:58:12.028 TEST_HEADER include/spdk/blob.h 00:58:12.028 CC app/nvmf_tgt/nvmf_main.o 00:58:12.028 TEST_HEADER include/spdk/conf.h 00:58:12.028 TEST_HEADER include/spdk/config.h 00:58:12.028 TEST_HEADER include/spdk/cpuset.h 00:58:12.028 TEST_HEADER include/spdk/crc16.h 00:58:12.028 TEST_HEADER include/spdk/crc32.h 00:58:12.028 TEST_HEADER include/spdk/crc64.h 00:58:12.028 TEST_HEADER include/spdk/dif.h 00:58:12.028 TEST_HEADER include/spdk/dma.h 00:58:12.028 TEST_HEADER include/spdk/endian.h 00:58:12.028 TEST_HEADER include/spdk/env_dpdk.h 00:58:12.028 CC examples/util/zipf/zipf.o 00:58:12.028 TEST_HEADER include/spdk/env.h 00:58:12.028 TEST_HEADER include/spdk/event.h 00:58:12.028 CC test/thread/poller_perf/poller_perf.o 00:58:12.028 TEST_HEADER include/spdk/fd_group.h 00:58:12.028 TEST_HEADER include/spdk/fd.h 00:58:12.028 TEST_HEADER include/spdk/file.h 00:58:12.028 TEST_HEADER include/spdk/fsdev.h 00:58:12.028 TEST_HEADER include/spdk/fsdev_module.h 00:58:12.028 TEST_HEADER include/spdk/ftl.h 00:58:12.028 TEST_HEADER include/spdk/fuse_dispatcher.h 00:58:12.028 TEST_HEADER include/spdk/gpt_spec.h 00:58:12.028 CC test/dma/test_dma/test_dma.o 00:58:12.028 TEST_HEADER 
include/spdk/hexlify.h 00:58:12.028 TEST_HEADER include/spdk/histogram_data.h 00:58:12.028 TEST_HEADER include/spdk/idxd.h 00:58:12.028 TEST_HEADER include/spdk/idxd_spec.h 00:58:12.028 TEST_HEADER include/spdk/init.h 00:58:12.028 TEST_HEADER include/spdk/ioat.h 00:58:12.028 TEST_HEADER include/spdk/ioat_spec.h 00:58:12.028 TEST_HEADER include/spdk/iscsi_spec.h 00:58:12.028 CC test/app/bdev_svc/bdev_svc.o 00:58:12.029 TEST_HEADER include/spdk/json.h 00:58:12.029 TEST_HEADER include/spdk/jsonrpc.h 00:58:12.029 TEST_HEADER include/spdk/keyring.h 00:58:12.029 TEST_HEADER include/spdk/keyring_module.h 00:58:12.029 TEST_HEADER include/spdk/likely.h 00:58:12.029 TEST_HEADER include/spdk/log.h 00:58:12.029 TEST_HEADER include/spdk/lvol.h 00:58:12.029 TEST_HEADER include/spdk/md5.h 00:58:12.029 TEST_HEADER include/spdk/memory.h 00:58:12.029 TEST_HEADER include/spdk/mmio.h 00:58:12.029 TEST_HEADER include/spdk/nbd.h 00:58:12.029 TEST_HEADER include/spdk/net.h 00:58:12.029 LINK rpc_client_test 00:58:12.029 TEST_HEADER include/spdk/notify.h 00:58:12.029 TEST_HEADER include/spdk/nvme.h 00:58:12.029 TEST_HEADER include/spdk/nvme_intel.h 00:58:12.029 TEST_HEADER include/spdk/nvme_ocssd.h 00:58:12.029 CC test/env/mem_callbacks/mem_callbacks.o 00:58:12.029 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:58:12.029 TEST_HEADER include/spdk/nvme_spec.h 00:58:12.029 TEST_HEADER include/spdk/nvme_zns.h 00:58:12.029 TEST_HEADER include/spdk/nvmf_cmd.h 00:58:12.029 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:58:12.029 TEST_HEADER include/spdk/nvmf.h 00:58:12.029 TEST_HEADER include/spdk/nvmf_spec.h 00:58:12.029 TEST_HEADER include/spdk/nvmf_transport.h 00:58:12.029 TEST_HEADER include/spdk/opal.h 00:58:12.029 TEST_HEADER include/spdk/opal_spec.h 00:58:12.029 TEST_HEADER include/spdk/pci_ids.h 00:58:12.029 TEST_HEADER include/spdk/pipe.h 00:58:12.029 TEST_HEADER include/spdk/queue.h 00:58:12.029 TEST_HEADER include/spdk/reduce.h 00:58:12.029 TEST_HEADER include/spdk/rpc.h 00:58:12.029 TEST_HEADER include/spdk/scheduler.h 00:58:12.029 TEST_HEADER include/spdk/scsi.h 00:58:12.029 TEST_HEADER include/spdk/scsi_spec.h 00:58:12.029 LINK spdk_trace_record 00:58:12.029 LINK zipf 00:58:12.029 TEST_HEADER include/spdk/sock.h 00:58:12.029 TEST_HEADER include/spdk/stdinc.h 00:58:12.029 TEST_HEADER include/spdk/string.h 00:58:12.029 LINK nvmf_tgt 00:58:12.029 TEST_HEADER include/spdk/thread.h 00:58:12.029 LINK poller_perf 00:58:12.029 TEST_HEADER include/spdk/trace.h 00:58:12.029 TEST_HEADER include/spdk/trace_parser.h 00:58:12.029 TEST_HEADER include/spdk/tree.h 00:58:12.029 TEST_HEADER include/spdk/ublk.h 00:58:12.029 TEST_HEADER include/spdk/util.h 00:58:12.029 TEST_HEADER include/spdk/uuid.h 00:58:12.029 TEST_HEADER include/spdk/version.h 00:58:12.029 TEST_HEADER include/spdk/vfio_user_pci.h 00:58:12.029 TEST_HEADER include/spdk/vfio_user_spec.h 00:58:12.029 TEST_HEADER include/spdk/vhost.h 00:58:12.029 TEST_HEADER include/spdk/vmd.h 00:58:12.029 TEST_HEADER include/spdk/xor.h 00:58:12.029 TEST_HEADER include/spdk/zipf.h 00:58:12.029 CXX test/cpp_headers/accel.o 00:58:12.287 LINK bdev_svc 00:58:12.287 CXX test/cpp_headers/accel_module.o 00:58:12.287 LINK spdk_trace 00:58:12.287 CXX test/cpp_headers/assert.o 00:58:12.287 CXX test/cpp_headers/barrier.o 00:58:12.287 CXX test/cpp_headers/base64.o 00:58:12.287 CC test/env/vtophys/vtophys.o 00:58:12.287 CC examples/ioat/perf/perf.o 00:58:12.546 LINK test_dma 00:58:12.546 CC examples/ioat/verify/verify.o 00:58:12.546 CXX test/cpp_headers/bdev.o 00:58:12.546 CC 
app/iscsi_tgt/iscsi_tgt.o 00:58:12.546 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:58:12.546 CC test/env/memory/memory_ut.o 00:58:12.546 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:58:12.546 LINK vtophys 00:58:12.546 LINK mem_callbacks 00:58:12.546 LINK ioat_perf 00:58:12.546 CXX test/cpp_headers/bdev_module.o 00:58:12.546 LINK env_dpdk_post_init 00:58:12.546 LINK verify 00:58:12.806 LINK iscsi_tgt 00:58:12.806 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:58:12.806 CXX test/cpp_headers/bdev_zone.o 00:58:12.806 CC test/env/pci/pci_ut.o 00:58:12.806 CXX test/cpp_headers/bit_array.o 00:58:12.806 CXX test/cpp_headers/bit_pool.o 00:58:12.806 CXX test/cpp_headers/blob_bdev.o 00:58:12.806 CC examples/vmd/lsvmd/lsvmd.o 00:58:12.806 LINK nvme_fuzz 00:58:12.806 CXX test/cpp_headers/blobfs_bdev.o 00:58:13.066 CXX test/cpp_headers/blobfs.o 00:58:13.066 LINK lsvmd 00:58:13.066 CC app/spdk_tgt/spdk_tgt.o 00:58:13.066 CC test/app/histogram_perf/histogram_perf.o 00:58:13.066 CC examples/vmd/led/led.o 00:58:13.066 CXX test/cpp_headers/blob.o 00:58:13.066 LINK pci_ut 00:58:13.066 LINK histogram_perf 00:58:13.066 LINK led 00:58:13.066 CC examples/idxd/perf/perf.o 00:58:13.326 LINK spdk_tgt 00:58:13.326 CXX test/cpp_headers/conf.o 00:58:13.326 CC test/event/event_perf/event_perf.o 00:58:13.326 CC test/nvme/aer/aer.o 00:58:13.326 CC test/nvme/reset/reset.o 00:58:13.326 CC test/nvme/sgl/sgl.o 00:58:13.326 CXX test/cpp_headers/config.o 00:58:13.326 LINK event_perf 00:58:13.326 CXX test/cpp_headers/cpuset.o 00:58:13.326 CC examples/interrupt_tgt/interrupt_tgt.o 00:58:13.586 CC app/spdk_lspci/spdk_lspci.o 00:58:13.586 LINK idxd_perf 00:58:13.586 LINK memory_ut 00:58:13.586 LINK aer 00:58:13.586 CXX test/cpp_headers/crc16.o 00:58:13.586 LINK reset 00:58:13.586 LINK spdk_lspci 00:58:13.586 CC test/event/reactor/reactor.o 00:58:13.586 LINK sgl 00:58:13.586 LINK interrupt_tgt 00:58:13.586 CC test/event/reactor_perf/reactor_perf.o 00:58:13.845 CC test/event/app_repeat/app_repeat.o 00:58:13.845 CXX test/cpp_headers/crc32.o 00:58:13.845 LINK reactor 00:58:13.845 CC test/event/scheduler/scheduler.o 00:58:13.845 LINK reactor_perf 00:58:13.845 CC app/spdk_nvme_perf/perf.o 00:58:13.845 CC test/nvme/e2edp/nvme_dp.o 00:58:13.845 CC test/nvme/overhead/overhead.o 00:58:13.845 CXX test/cpp_headers/crc64.o 00:58:13.845 LINK app_repeat 00:58:13.845 CC examples/thread/thread/thread_ex.o 00:58:14.105 LINK scheduler 00:58:14.105 CXX test/cpp_headers/dif.o 00:58:14.105 LINK iscsi_fuzz 00:58:14.105 CC test/accel/dif/dif.o 00:58:14.105 LINK nvme_dp 00:58:14.105 LINK overhead 00:58:14.105 CC test/blobfs/mkfs/mkfs.o 00:58:14.105 LINK thread 00:58:14.105 CXX test/cpp_headers/dma.o 00:58:14.365 CXX test/cpp_headers/endian.o 00:58:14.365 CC test/lvol/esnap/esnap.o 00:58:14.365 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:58:14.365 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:58:14.365 LINK mkfs 00:58:14.365 CC test/nvme/err_injection/err_injection.o 00:58:14.365 CXX test/cpp_headers/env_dpdk.o 00:58:14.365 CXX test/cpp_headers/env.o 00:58:14.625 CXX test/cpp_headers/event.o 00:58:14.625 LINK spdk_nvme_perf 00:58:14.625 LINK err_injection 00:58:14.625 CC examples/sock/hello_world/hello_sock.o 00:58:14.625 CC examples/fsdev/hello_world/hello_fsdev.o 00:58:14.625 LINK dif 00:58:14.625 LINK vhost_fuzz 00:58:14.625 CC test/app/jsoncat/jsoncat.o 00:58:14.625 CXX test/cpp_headers/fd_group.o 00:58:14.625 CC test/app/stub/stub.o 00:58:14.885 CC test/nvme/startup/startup.o 00:58:14.885 LINK hello_sock 00:58:14.885 CXX test/cpp_headers/fd.o 
00:58:14.885 CC app/spdk_nvme_identify/identify.o 00:58:14.885 LINK jsoncat 00:58:14.885 LINK stub 00:58:14.885 LINK hello_fsdev 00:58:14.885 CC test/nvme/reserve/reserve.o 00:58:14.885 CC test/nvme/simple_copy/simple_copy.o 00:58:14.885 LINK startup 00:58:14.885 CXX test/cpp_headers/file.o 00:58:15.145 CC test/nvme/connect_stress/connect_stress.o 00:58:15.145 CC test/nvme/boot_partition/boot_partition.o 00:58:15.145 CC test/nvme/compliance/nvme_compliance.o 00:58:15.145 LINK reserve 00:58:15.145 CXX test/cpp_headers/fsdev.o 00:58:15.145 LINK simple_copy 00:58:15.145 LINK boot_partition 00:58:15.145 LINK connect_stress 00:58:15.145 CC examples/accel/perf/accel_perf.o 00:58:15.145 CC examples/blob/hello_world/hello_blob.o 00:58:15.405 CXX test/cpp_headers/fsdev_module.o 00:58:15.405 CC examples/blob/cli/blobcli.o 00:58:15.405 CC test/nvme/fused_ordering/fused_ordering.o 00:58:15.405 LINK nvme_compliance 00:58:15.405 CC examples/nvme/hello_world/hello_world.o 00:58:15.405 CXX test/cpp_headers/ftl.o 00:58:15.405 LINK spdk_nvme_identify 00:58:15.405 LINK hello_blob 00:58:15.405 CC test/bdev/bdevio/bdevio.o 00:58:15.665 LINK fused_ordering 00:58:15.665 LINK accel_perf 00:58:15.665 CXX test/cpp_headers/fuse_dispatcher.o 00:58:15.665 LINK hello_world 00:58:15.665 CXX test/cpp_headers/gpt_spec.o 00:58:15.665 CC test/nvme/doorbell_aers/doorbell_aers.o 00:58:15.665 CC app/spdk_nvme_discover/discovery_aer.o 00:58:15.665 LINK blobcli 00:58:15.925 CXX test/cpp_headers/hexlify.o 00:58:15.925 CC test/nvme/fdp/fdp.o 00:58:15.925 LINK doorbell_aers 00:58:15.925 CC test/nvme/cuse/cuse.o 00:58:15.925 CC app/spdk_top/spdk_top.o 00:58:15.925 CC examples/nvme/reconnect/reconnect.o 00:58:15.925 LINK spdk_nvme_discover 00:58:15.925 LINK bdevio 00:58:15.925 CXX test/cpp_headers/histogram_data.o 00:58:15.925 CXX test/cpp_headers/idxd.o 00:58:16.185 CXX test/cpp_headers/idxd_spec.o 00:58:16.185 CC app/vhost/vhost.o 00:58:16.185 LINK fdp 00:58:16.185 CC examples/nvme/nvme_manage/nvme_manage.o 00:58:16.185 LINK reconnect 00:58:16.185 CC examples/nvme/arbitration/arbitration.o 00:58:16.185 CXX test/cpp_headers/init.o 00:58:16.185 LINK vhost 00:58:16.185 CC app/spdk_dd/spdk_dd.o 00:58:16.185 CXX test/cpp_headers/ioat.o 00:58:16.444 CXX test/cpp_headers/ioat_spec.o 00:58:16.444 CC examples/nvme/hotplug/hotplug.o 00:58:16.444 CXX test/cpp_headers/iscsi_spec.o 00:58:16.444 LINK arbitration 00:58:16.444 CC examples/nvme/cmb_copy/cmb_copy.o 00:58:16.444 CC examples/nvme/abort/abort.o 00:58:16.705 LINK spdk_top 00:58:16.705 LINK nvme_manage 00:58:16.705 CXX test/cpp_headers/json.o 00:58:16.705 LINK spdk_dd 00:58:16.705 LINK hotplug 00:58:16.705 LINK cmb_copy 00:58:16.705 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:58:16.705 CXX test/cpp_headers/jsonrpc.o 00:58:16.964 CXX test/cpp_headers/keyring.o 00:58:16.964 CXX test/cpp_headers/keyring_module.o 00:58:16.964 LINK abort 00:58:16.964 LINK pmr_persistence 00:58:16.964 CC app/fio/nvme/fio_plugin.o 00:58:16.964 CC app/fio/bdev/fio_plugin.o 00:58:16.964 CC examples/bdev/hello_world/hello_bdev.o 00:58:16.964 LINK cuse 00:58:16.964 CXX test/cpp_headers/likely.o 00:58:16.964 CXX test/cpp_headers/log.o 00:58:16.964 CXX test/cpp_headers/lvol.o 00:58:16.964 CXX test/cpp_headers/md5.o 00:58:16.964 CC examples/bdev/bdevperf/bdevperf.o 00:58:17.224 CXX test/cpp_headers/memory.o 00:58:17.224 CXX test/cpp_headers/mmio.o 00:58:17.224 CXX test/cpp_headers/nbd.o 00:58:17.224 LINK hello_bdev 00:58:17.224 CXX test/cpp_headers/net.o 00:58:17.224 CXX test/cpp_headers/notify.o 
00:58:17.224 CXX test/cpp_headers/nvme.o 00:58:17.224 CXX test/cpp_headers/nvme_intel.o 00:58:17.224 CXX test/cpp_headers/nvme_ocssd.o 00:58:17.224 CXX test/cpp_headers/nvme_ocssd_spec.o 00:58:17.224 CXX test/cpp_headers/nvme_spec.o 00:58:17.224 CXX test/cpp_headers/nvme_zns.o 00:58:17.484 LINK spdk_bdev 00:58:17.484 LINK spdk_nvme 00:58:17.484 CXX test/cpp_headers/nvmf_cmd.o 00:58:17.484 CXX test/cpp_headers/nvmf_fc_spec.o 00:58:17.484 CXX test/cpp_headers/nvmf.o 00:58:17.484 CXX test/cpp_headers/nvmf_spec.o 00:58:17.484 CXX test/cpp_headers/nvmf_transport.o 00:58:17.484 CXX test/cpp_headers/opal.o 00:58:17.484 CXX test/cpp_headers/opal_spec.o 00:58:17.484 CXX test/cpp_headers/pci_ids.o 00:58:17.484 CXX test/cpp_headers/pipe.o 00:58:17.744 CXX test/cpp_headers/queue.o 00:58:17.744 CXX test/cpp_headers/reduce.o 00:58:17.744 CXX test/cpp_headers/rpc.o 00:58:17.744 CXX test/cpp_headers/scheduler.o 00:58:17.744 CXX test/cpp_headers/scsi.o 00:58:17.744 CXX test/cpp_headers/scsi_spec.o 00:58:17.744 CXX test/cpp_headers/sock.o 00:58:17.744 CXX test/cpp_headers/stdinc.o 00:58:17.744 CXX test/cpp_headers/string.o 00:58:17.744 LINK bdevperf 00:58:17.744 CXX test/cpp_headers/thread.o 00:58:17.744 CXX test/cpp_headers/trace.o 00:58:17.744 CXX test/cpp_headers/trace_parser.o 00:58:17.744 CXX test/cpp_headers/tree.o 00:58:17.744 CXX test/cpp_headers/ublk.o 00:58:17.744 CXX test/cpp_headers/util.o 00:58:17.744 CXX test/cpp_headers/uuid.o 00:58:17.744 CXX test/cpp_headers/version.o 00:58:18.004 CXX test/cpp_headers/vfio_user_pci.o 00:58:18.004 CXX test/cpp_headers/vfio_user_spec.o 00:58:18.004 CXX test/cpp_headers/vhost.o 00:58:18.004 CXX test/cpp_headers/vmd.o 00:58:18.004 CXX test/cpp_headers/xor.o 00:58:18.004 CXX test/cpp_headers/zipf.o 00:58:18.264 CC examples/nvmf/nvmf/nvmf.o 00:58:18.524 LINK esnap 00:58:18.524 LINK nvmf 00:58:18.785 ************************************ 00:58:18.785 END TEST make 00:58:18.785 ************************************ 00:58:18.785 00:58:18.785 real 1m11.787s 00:58:18.785 user 6m10.181s 00:58:18.785 sys 1m46.647s 00:58:18.785 05:57:13 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:58:18.785 05:57:13 make -- common/autotest_common.sh@10 -- $ set +x 00:58:19.045 05:57:13 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:58:19.045 05:57:13 -- pm/common@29 -- $ signal_monitor_resources TERM 00:58:19.045 05:57:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:58:19.045 05:57:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:58:19.045 05:57:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:58:19.045 05:57:13 -- pm/common@44 -- $ pid=5243 00:58:19.045 05:57:13 -- pm/common@50 -- $ kill -TERM 5243 00:58:19.045 05:57:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:58:19.045 05:57:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:58:19.045 05:57:13 -- pm/common@44 -- $ pid=5245 00:58:19.045 05:57:13 -- pm/common@50 -- $ kill -TERM 5245 00:58:19.045 05:57:13 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:58:19.045 05:57:13 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:58:19.045 05:57:13 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:58:19.045 05:57:13 -- common/autotest_common.sh@1711 -- # lcov --version 00:58:19.045 05:57:13 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 
00:58:19.045 05:57:13 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:58:19.045 05:57:13 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:58:19.045 05:57:13 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:58:19.045 05:57:13 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:58:19.045 05:57:13 -- scripts/common.sh@336 -- # IFS=.-: 00:58:19.045 05:57:13 -- scripts/common.sh@336 -- # read -ra ver1 00:58:19.045 05:57:13 -- scripts/common.sh@337 -- # IFS=.-: 00:58:19.045 05:57:13 -- scripts/common.sh@337 -- # read -ra ver2 00:58:19.045 05:57:13 -- scripts/common.sh@338 -- # local 'op=<' 00:58:19.045 05:57:13 -- scripts/common.sh@340 -- # ver1_l=2 00:58:19.045 05:57:13 -- scripts/common.sh@341 -- # ver2_l=1 00:58:19.045 05:57:13 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:58:19.045 05:57:13 -- scripts/common.sh@344 -- # case "$op" in 00:58:19.045 05:57:13 -- scripts/common.sh@345 -- # : 1 00:58:19.045 05:57:13 -- scripts/common.sh@364 -- # (( v = 0 )) 00:58:19.045 05:57:13 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:58:19.045 05:57:13 -- scripts/common.sh@365 -- # decimal 1 00:58:19.045 05:57:13 -- scripts/common.sh@353 -- # local d=1 00:58:19.045 05:57:13 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:58:19.045 05:57:13 -- scripts/common.sh@355 -- # echo 1 00:58:19.045 05:57:13 -- scripts/common.sh@365 -- # ver1[v]=1 00:58:19.045 05:57:13 -- scripts/common.sh@366 -- # decimal 2 00:58:19.045 05:57:13 -- scripts/common.sh@353 -- # local d=2 00:58:19.045 05:57:13 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:58:19.045 05:57:13 -- scripts/common.sh@355 -- # echo 2 00:58:19.045 05:57:13 -- scripts/common.sh@366 -- # ver2[v]=2 00:58:19.045 05:57:13 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:58:19.045 05:57:13 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:58:19.045 05:57:13 -- scripts/common.sh@368 -- # return 0 00:58:19.045 05:57:13 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:58:19.045 05:57:13 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:58:19.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:19.045 --rc genhtml_branch_coverage=1 00:58:19.045 --rc genhtml_function_coverage=1 00:58:19.045 --rc genhtml_legend=1 00:58:19.045 --rc geninfo_all_blocks=1 00:58:19.045 --rc geninfo_unexecuted_blocks=1 00:58:19.045 00:58:19.045 ' 00:58:19.045 05:57:13 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:58:19.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:19.045 --rc genhtml_branch_coverage=1 00:58:19.045 --rc genhtml_function_coverage=1 00:58:19.045 --rc genhtml_legend=1 00:58:19.045 --rc geninfo_all_blocks=1 00:58:19.045 --rc geninfo_unexecuted_blocks=1 00:58:19.045 00:58:19.045 ' 00:58:19.045 05:57:13 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:58:19.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:19.045 --rc genhtml_branch_coverage=1 00:58:19.045 --rc genhtml_function_coverage=1 00:58:19.045 --rc genhtml_legend=1 00:58:19.045 --rc geninfo_all_blocks=1 00:58:19.045 --rc geninfo_unexecuted_blocks=1 00:58:19.045 00:58:19.045 ' 00:58:19.045 05:57:13 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:58:19.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:19.045 --rc genhtml_branch_coverage=1 00:58:19.045 --rc genhtml_function_coverage=1 00:58:19.045 --rc genhtml_legend=1 00:58:19.045 --rc geninfo_all_blocks=1 00:58:19.045 --rc 
geninfo_unexecuted_blocks=1 00:58:19.045 00:58:19.045 ' 00:58:19.045 05:57:13 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:58:19.045 05:57:13 -- nvmf/common.sh@7 -- # uname -s 00:58:19.045 05:57:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:58:19.045 05:57:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:58:19.045 05:57:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:58:19.045 05:57:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:58:19.045 05:57:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:58:19.045 05:57:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:58:19.045 05:57:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:58:19.045 05:57:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:58:19.045 05:57:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:58:19.046 05:57:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:58:19.306 05:57:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 00:58:19.306 05:57:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 00:58:19.306 05:57:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:58:19.306 05:57:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:58:19.306 05:57:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:58:19.306 05:57:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:58:19.306 05:57:13 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:58:19.306 05:57:13 -- scripts/common.sh@15 -- # shopt -s extglob 00:58:19.306 05:57:13 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:58:19.306 05:57:13 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:58:19.306 05:57:13 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:58:19.306 05:57:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:19.306 05:57:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:19.306 05:57:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:19.306 05:57:13 -- paths/export.sh@5 -- # export PATH 00:58:19.306 05:57:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:19.306 05:57:13 -- nvmf/common.sh@51 -- # : 0 00:58:19.306 05:57:13 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:58:19.306 05:57:13 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:58:19.306 05:57:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:58:19.306 05:57:13 -- nvmf/common.sh@29 -- 
# NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:58:19.306 05:57:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:58:19.306 05:57:13 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:58:19.306 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:58:19.306 05:57:13 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:58:19.306 05:57:13 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:58:19.306 05:57:13 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:58:19.306 05:57:13 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:58:19.306 05:57:13 -- spdk/autotest.sh@32 -- # uname -s 00:58:19.306 05:57:13 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:58:19.306 05:57:13 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:58:19.306 05:57:13 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:58:19.306 05:57:13 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:58:19.306 05:57:13 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:58:19.306 05:57:13 -- spdk/autotest.sh@44 -- # modprobe nbd 00:58:19.306 05:57:13 -- spdk/autotest.sh@46 -- # type -P udevadm 00:58:19.306 05:57:13 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:58:19.306 05:57:13 -- spdk/autotest.sh@48 -- # udevadm_pid=54230 00:58:19.306 05:57:13 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:58:19.306 05:57:13 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:58:19.306 05:57:13 -- pm/common@17 -- # local monitor 00:58:19.306 05:57:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:58:19.306 05:57:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:58:19.306 05:57:13 -- pm/common@25 -- # sleep 1 00:58:19.306 05:57:13 -- pm/common@21 -- # date +%s 00:58:19.306 05:57:13 -- pm/common@21 -- # date +%s 00:58:19.306 05:57:13 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733723833 00:58:19.306 05:57:13 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733723833 00:58:19.306 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733723833_collect-vmstat.pm.log 00:58:19.306 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733723833_collect-cpu-load.pm.log 00:58:20.245 05:57:14 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:58:20.245 05:57:14 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:58:20.245 05:57:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:58:20.245 05:57:14 -- common/autotest_common.sh@10 -- # set +x 00:58:20.245 05:57:14 -- spdk/autotest.sh@59 -- # create_test_list 00:58:20.245 05:57:14 -- common/autotest_common.sh@752 -- # xtrace_disable 00:58:20.245 05:57:14 -- common/autotest_common.sh@10 -- # set +x 00:58:20.504 05:57:14 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:58:20.504 05:57:14 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:58:20.504 05:57:14 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:58:20.504 05:57:14 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:58:20.504 05:57:14 -- spdk/autotest.sh@63 -- # cd 
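The xtrace from scripts/common.sh a little earlier shows how autotest decides whether the installed lcov (reported as 1.15 here) is older than 2 before choosing coverage flags: both version strings are split on the characters '.', '-' and ':' into arrays and compared field by field, with missing fields treated as 0. A minimal bash sketch of that comparison, for illustration only (the helper name and the exact flag handling below are assumptions, not the autotest code itself):

    # Field-wise "is version A older than version B?" check, in the spirit of
    # the cmp_versions trace above. Illustrative only.
    version_lt() {
        local IFS=.-:                 # split on dots, dashes and colons
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < max; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0   # strictly older
            (( x > y )) && return 1   # strictly newer
        done
        return 1                      # equal versions are not "less than"
    }

    # Example matching this run: only a pre-2.0 lcov gets the extra --rc options.
    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi

With lcov 1.15 installed, as in this run, the branch- and function-coverage options are enabled and exported through LCOV_OPTS for the coverage passes that follow.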
/home/vagrant/spdk_repo/spdk 00:58:20.504 05:57:14 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:58:20.504 05:57:14 -- common/autotest_common.sh@1457 -- # uname 00:58:20.504 05:57:14 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:58:20.504 05:57:14 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:58:20.504 05:57:14 -- common/autotest_common.sh@1477 -- # uname 00:58:20.504 05:57:14 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:58:20.504 05:57:14 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:58:20.504 05:57:14 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:58:20.504 lcov: LCOV version 1.15 00:58:20.504 05:57:14 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:58:38.595 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:58:38.595 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:58:53.475 05:57:47 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:58:53.475 05:57:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:58:53.475 05:57:47 -- common/autotest_common.sh@10 -- # set +x 00:58:53.475 05:57:47 -- spdk/autotest.sh@78 -- # rm -f 00:58:53.475 05:57:47 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:58:54.413 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:58:54.413 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:58:54.413 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:58:54.413 05:57:48 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:58:54.413 05:57:48 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:58:54.413 05:57:48 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:58:54.413 05:57:48 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:58:54.413 05:57:48 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:58:54.413 05:57:48 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:58:54.413 05:57:48 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:58:54.413 05:57:48 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:58:54.413 05:57:48 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:58:54.413 05:57:48 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:58:54.413 05:57:48 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:58:54.413 05:57:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:58:54.413 05:57:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:58:54.413 05:57:48 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:58:54.413 05:57:48 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:58:54.413 05:57:48 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:58:54.413 05:57:48 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:58:54.413 05:57:48 -- common/autotest_common.sh@1650 -- # local 
device=nvme1n1 00:58:54.413 05:57:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:58:54.413 05:57:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:58:54.413 05:57:48 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:58:54.413 05:57:48 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:58:54.413 05:57:48 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:58:54.413 05:57:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:58:54.413 05:57:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:58:54.413 05:57:48 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:58:54.413 05:57:48 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:58:54.413 05:57:48 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:58:54.413 05:57:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:58:54.413 05:57:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:58:54.413 05:57:48 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:58:54.413 05:57:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:58:54.413 05:57:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:58:54.413 05:57:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:58:54.413 05:57:48 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:58:54.413 05:57:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:58:54.413 No valid GPT data, bailing 00:58:54.413 05:57:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:58:54.413 05:57:48 -- scripts/common.sh@394 -- # pt= 00:58:54.414 05:57:48 -- scripts/common.sh@395 -- # return 1 00:58:54.414 05:57:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:58:54.414 1+0 records in 00:58:54.414 1+0 records out 00:58:54.414 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00653592 s, 160 MB/s 00:58:54.414 05:57:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:58:54.414 05:57:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:58:54.414 05:57:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:58:54.414 05:57:48 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:58:54.414 05:57:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:58:54.673 No valid GPT data, bailing 00:58:54.674 05:57:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:58:54.674 05:57:49 -- scripts/common.sh@394 -- # pt= 00:58:54.674 05:57:49 -- scripts/common.sh@395 -- # return 1 00:58:54.674 05:57:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:58:54.674 1+0 records in 00:58:54.674 1+0 records out 00:58:54.674 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0069549 s, 151 MB/s 00:58:54.674 05:57:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:58:54.674 05:57:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:58:54.674 05:57:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:58:54.674 05:57:49 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:58:54.674 05:57:49 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:58:54.674 No valid GPT data, bailing 00:58:54.674 05:57:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:58:54.674 05:57:49 -- scripts/common.sh@394 -- # pt= 00:58:54.674 05:57:49 -- scripts/common.sh@395 -- # 
return 1 00:58:54.674 05:57:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:58:54.674 1+0 records in 00:58:54.674 1+0 records out 00:58:54.674 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00688102 s, 152 MB/s 00:58:54.674 05:57:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:58:54.674 05:57:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:58:54.674 05:57:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:58:54.674 05:57:49 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:58:54.674 05:57:49 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:58:54.674 No valid GPT data, bailing 00:58:54.674 05:57:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:58:54.674 05:57:49 -- scripts/common.sh@394 -- # pt= 00:58:54.674 05:57:49 -- scripts/common.sh@395 -- # return 1 00:58:54.674 05:57:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:58:54.674 1+0 records in 00:58:54.674 1+0 records out 00:58:54.674 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00568471 s, 184 MB/s 00:58:54.674 05:57:49 -- spdk/autotest.sh@105 -- # sync 00:58:54.933 05:57:49 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:58:54.933 05:57:49 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:58:54.933 05:57:49 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:58:58.228 05:57:52 -- spdk/autotest.sh@111 -- # uname -s 00:58:58.228 05:57:52 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:58:58.228 05:57:52 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:58:58.228 05:57:52 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:58:58.797 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:58:58.797 Hugepages 00:58:58.797 node hugesize free / total 00:58:58.797 node0 1048576kB 0 / 0 00:58:58.797 node0 2048kB 0 / 0 00:58:58.797 00:58:58.797 Type BDF Vendor Device NUMA Driver Device Block devices 00:58:58.797 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:58:59.056 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:58:59.056 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:58:59.056 05:57:53 -- spdk/autotest.sh@117 -- # uname -s 00:58:59.056 05:57:53 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:58:59.056 05:57:53 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:58:59.056 05:57:53 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:58:59.993 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:58:59.993 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:59:00.252 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:59:00.253 05:57:54 -- common/autotest_common.sh@1517 -- # sleep 1 00:59:01.206 05:57:55 -- common/autotest_common.sh@1518 -- # bdfs=() 00:59:01.206 05:57:55 -- common/autotest_common.sh@1518 -- # local bdfs 00:59:01.206 05:57:55 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:59:01.206 05:57:55 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:59:01.206 05:57:55 -- common/autotest_common.sh@1498 -- # bdfs=() 00:59:01.206 05:57:55 -- common/autotest_common.sh@1498 -- # local bdfs 00:59:01.206 05:57:55 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:59:01.206 05:57:55 -- 
common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:59:01.206 05:57:55 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:59:01.206 05:57:55 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:59:01.206 05:57:55 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:59:01.206 05:57:55 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:59:01.773 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:59:01.773 Waiting for block devices as requested 00:59:02.032 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:59:02.032 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:59:02.291 05:57:56 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:59:02.291 05:57:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:59:02.291 05:57:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:59:02.291 05:57:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:59:02.291 05:57:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:59:02.291 05:57:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:59:02.291 05:57:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:59:02.291 05:57:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:59:02.291 05:57:56 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:59:02.292 05:57:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:59:02.292 05:57:56 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:59:02.292 05:57:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:59:02.292 05:57:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:59:02.292 05:57:56 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:59:02.292 05:57:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:59:02.292 05:57:56 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:59:02.292 05:57:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:59:02.292 05:57:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:59:02.292 05:57:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:59:02.292 05:57:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:59:02.292 05:57:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:59:02.292 05:57:56 -- common/autotest_common.sh@1543 -- # continue 00:59:02.292 05:57:56 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:59:02.292 05:57:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:59:02.292 05:57:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:59:02.292 05:57:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:59:02.292 05:57:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:59:02.292 05:57:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:59:02.292 05:57:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:59:02.292 05:57:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:59:02.292 05:57:56 -- 
common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:59:02.292 05:57:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:59:02.292 05:57:56 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:59:02.292 05:57:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:59:02.292 05:57:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:59:02.292 05:57:56 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:59:02.292 05:57:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:59:02.292 05:57:56 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:59:02.292 05:57:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:59:02.292 05:57:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:59:02.292 05:57:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:59:02.292 05:57:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:59:02.292 05:57:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:59:02.292 05:57:56 -- common/autotest_common.sh@1543 -- # continue 00:59:02.292 05:57:56 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:59:02.292 05:57:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:59:02.292 05:57:56 -- common/autotest_common.sh@10 -- # set +x 00:59:02.292 05:57:56 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:59:02.292 05:57:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:59:02.292 05:57:56 -- common/autotest_common.sh@10 -- # set +x 00:59:02.292 05:57:56 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:59:03.227 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:59:03.227 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:59:03.485 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:59:03.485 05:57:57 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:59:03.485 05:57:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:59:03.485 05:57:57 -- common/autotest_common.sh@10 -- # set +x 00:59:03.485 05:57:57 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:59:03.485 05:57:57 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:59:03.485 05:57:57 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:59:03.485 05:57:57 -- common/autotest_common.sh@1563 -- # bdfs=() 00:59:03.485 05:57:57 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:59:03.485 05:57:57 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:59:03.485 05:57:57 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:59:03.485 05:57:57 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:59:03.485 05:57:57 -- common/autotest_common.sh@1498 -- # bdfs=() 00:59:03.485 05:57:57 -- common/autotest_common.sh@1498 -- # local bdfs 00:59:03.485 05:57:57 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:59:03.485 05:57:57 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:59:03.485 05:57:57 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:59:03.485 05:57:58 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:59:03.485 05:57:58 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:59:03.486 05:57:58 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:59:03.744 05:57:58 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:59:03.744 05:57:58 -- 
common/autotest_common.sh@1566 -- # device=0x0010 00:59:03.744 05:57:58 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:59:03.744 05:57:58 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:59:03.744 05:57:58 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:59:03.744 05:57:58 -- common/autotest_common.sh@1566 -- # device=0x0010 00:59:03.744 05:57:58 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:59:03.744 05:57:58 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:59:03.744 05:57:58 -- common/autotest_common.sh@1572 -- # return 0 00:59:03.744 05:57:58 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:59:03.744 05:57:58 -- common/autotest_common.sh@1580 -- # return 0 00:59:03.744 05:57:58 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:59:03.744 05:57:58 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:59:03.744 05:57:58 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:59:03.744 05:57:58 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:59:03.744 05:57:58 -- spdk/autotest.sh@149 -- # timing_enter lib 00:59:03.744 05:57:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:59:03.744 05:57:58 -- common/autotest_common.sh@10 -- # set +x 00:59:03.744 05:57:58 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:59:03.744 05:57:58 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:59:03.744 05:57:58 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:59:03.744 05:57:58 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:59:03.744 05:57:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:03.744 05:57:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:03.744 05:57:58 -- common/autotest_common.sh@10 -- # set +x 00:59:03.744 ************************************ 00:59:03.744 START TEST env 00:59:03.744 ************************************ 00:59:03.744 05:57:58 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:59:03.744 * Looking for test storage... 00:59:03.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:59:03.744 05:57:58 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:59:03.744 05:57:58 env -- common/autotest_common.sh@1711 -- # lcov --version 00:59:03.744 05:57:58 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:59:04.003 05:57:58 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:59:04.003 05:57:58 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:59:04.003 05:57:58 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:59:04.003 05:57:58 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:59:04.003 05:57:58 env -- scripts/common.sh@336 -- # IFS=.-: 00:59:04.003 05:57:58 env -- scripts/common.sh@336 -- # read -ra ver1 00:59:04.003 05:57:58 env -- scripts/common.sh@337 -- # IFS=.-: 00:59:04.003 05:57:58 env -- scripts/common.sh@337 -- # read -ra ver2 00:59:04.003 05:57:58 env -- scripts/common.sh@338 -- # local 'op=<' 00:59:04.003 05:57:58 env -- scripts/common.sh@340 -- # ver1_l=2 00:59:04.003 05:57:58 env -- scripts/common.sh@341 -- # ver2_l=1 00:59:04.003 05:57:58 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:59:04.003 05:57:58 env -- scripts/common.sh@344 -- # case "$op" in 00:59:04.003 05:57:58 env -- scripts/common.sh@345 -- # : 1 00:59:04.003 05:57:58 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:59:04.003 05:57:58 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:59:04.003 05:57:58 env -- scripts/common.sh@365 -- # decimal 1 00:59:04.003 05:57:58 env -- scripts/common.sh@353 -- # local d=1 00:59:04.003 05:57:58 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:59:04.003 05:57:58 env -- scripts/common.sh@355 -- # echo 1 00:59:04.003 05:57:58 env -- scripts/common.sh@365 -- # ver1[v]=1 00:59:04.003 05:57:58 env -- scripts/common.sh@366 -- # decimal 2 00:59:04.003 05:57:58 env -- scripts/common.sh@353 -- # local d=2 00:59:04.003 05:57:58 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:59:04.003 05:57:58 env -- scripts/common.sh@355 -- # echo 2 00:59:04.003 05:57:58 env -- scripts/common.sh@366 -- # ver2[v]=2 00:59:04.003 05:57:58 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:59:04.003 05:57:58 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:59:04.003 05:57:58 env -- scripts/common.sh@368 -- # return 0 00:59:04.003 05:57:58 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:59:04.003 05:57:58 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:59:04.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:04.003 --rc genhtml_branch_coverage=1 00:59:04.003 --rc genhtml_function_coverage=1 00:59:04.003 --rc genhtml_legend=1 00:59:04.003 --rc geninfo_all_blocks=1 00:59:04.003 --rc geninfo_unexecuted_blocks=1 00:59:04.003 00:59:04.003 ' 00:59:04.003 05:57:58 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:59:04.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:04.003 --rc genhtml_branch_coverage=1 00:59:04.003 --rc genhtml_function_coverage=1 00:59:04.003 --rc genhtml_legend=1 00:59:04.003 --rc geninfo_all_blocks=1 00:59:04.003 --rc geninfo_unexecuted_blocks=1 00:59:04.003 00:59:04.003 ' 00:59:04.003 05:57:58 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:59:04.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:04.003 --rc genhtml_branch_coverage=1 00:59:04.003 --rc genhtml_function_coverage=1 00:59:04.003 --rc genhtml_legend=1 00:59:04.003 --rc geninfo_all_blocks=1 00:59:04.003 --rc geninfo_unexecuted_blocks=1 00:59:04.003 00:59:04.003 ' 00:59:04.003 05:57:58 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:59:04.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:04.003 --rc genhtml_branch_coverage=1 00:59:04.003 --rc genhtml_function_coverage=1 00:59:04.003 --rc genhtml_legend=1 00:59:04.003 --rc geninfo_all_blocks=1 00:59:04.003 --rc geninfo_unexecuted_blocks=1 00:59:04.003 00:59:04.003 ' 00:59:04.003 05:57:58 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:59:04.003 05:57:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:04.003 05:57:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:04.003 05:57:58 env -- common/autotest_common.sh@10 -- # set +x 00:59:04.003 ************************************ 00:59:04.003 START TEST env_memory 00:59:04.003 ************************************ 00:59:04.003 05:57:58 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:59:04.003 00:59:04.003 00:59:04.003 CUnit - A unit testing framework for C - Version 2.1-3 00:59:04.003 http://cunit.sourceforge.net/ 00:59:04.003 00:59:04.003 00:59:04.003 Suite: memory 00:59:04.003 Test: alloc and free memory map ...[2024-12-09 05:57:58.418712] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:59:04.003 passed 00:59:04.003 Test: mem map translation ...[2024-12-09 05:57:58.437945] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:59:04.003 [2024-12-09 05:57:58.437967] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:59:04.003 [2024-12-09 05:57:58.438016] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:59:04.003 [2024-12-09 05:57:58.438024] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:59:04.003 passed 00:59:04.003 Test: mem map registration ...[2024-12-09 05:57:58.473222] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:59:04.003 [2024-12-09 05:57:58.473247] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:59:04.003 passed 00:59:04.003 Test: mem map adjacent registrations ...passed 00:59:04.003 00:59:04.003 Run Summary: Type Total Ran Passed Failed Inactive 00:59:04.003 suites 1 1 n/a 0 0 00:59:04.003 tests 4 4 4 0 0 00:59:04.003 asserts 152 152 152 0 n/a 00:59:04.003 00:59:04.003 Elapsed time = 0.130 seconds 00:59:04.003 00:59:04.003 real 0m0.150s 00:59:04.003 user 0m0.130s 00:59:04.003 sys 0m0.017s 00:59:04.003 05:57:58 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:04.003 05:57:58 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:59:04.003 ************************************ 00:59:04.003 END TEST env_memory 00:59:04.003 ************************************ 00:59:04.003 05:57:58 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:59:04.003 05:57:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:04.003 05:57:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:04.003 05:57:58 env -- common/autotest_common.sh@10 -- # set +x 00:59:04.263 ************************************ 00:59:04.263 START TEST env_vtophys 00:59:04.263 ************************************ 00:59:04.263 05:57:58 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:59:04.263 EAL: lib.eal log level changed from notice to debug 00:59:04.263 EAL: Detected lcore 0 as core 0 on socket 0 00:59:04.263 EAL: Detected lcore 1 as core 0 on socket 0 00:59:04.263 EAL: Detected lcore 2 as core 0 on socket 0 00:59:04.263 EAL: Detected lcore 3 as core 0 on socket 0 00:59:04.263 EAL: Detected lcore 4 as core 0 on socket 0 00:59:04.263 EAL: Detected lcore 5 as core 0 on socket 0 00:59:04.263 EAL: Detected lcore 6 as core 0 on socket 0 00:59:04.263 EAL: Detected lcore 7 as core 0 on socket 0 00:59:04.263 EAL: Detected lcore 8 as core 0 on socket 0 00:59:04.263 EAL: Detected lcore 9 as core 0 on socket 0 00:59:04.263 EAL: Maximum logical cores by configuration: 128 00:59:04.263 EAL: Detected CPU lcores: 10 00:59:04.263 EAL: Detected NUMA nodes: 1 00:59:04.263 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:59:04.263 EAL: Detected shared linkage of DPDK 00:59:04.263 EAL: No 
shared files mode enabled, IPC will be disabled 00:59:04.263 EAL: Selected IOVA mode 'PA' 00:59:04.263 EAL: Probing VFIO support... 00:59:04.263 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:59:04.263 EAL: VFIO modules not loaded, skipping VFIO support... 00:59:04.263 EAL: Ask a virtual area of 0x2e000 bytes 00:59:04.263 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:59:04.263 EAL: Setting up physically contiguous memory... 00:59:04.263 EAL: Setting maximum number of open files to 524288 00:59:04.263 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:59:04.263 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:59:04.263 EAL: Ask a virtual area of 0x61000 bytes 00:59:04.263 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:59:04.263 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:59:04.263 EAL: Ask a virtual area of 0x400000000 bytes 00:59:04.263 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:59:04.263 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:59:04.263 EAL: Ask a virtual area of 0x61000 bytes 00:59:04.263 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:59:04.263 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:59:04.263 EAL: Ask a virtual area of 0x400000000 bytes 00:59:04.263 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:59:04.263 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:59:04.263 EAL: Ask a virtual area of 0x61000 bytes 00:59:04.263 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:59:04.263 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:59:04.263 EAL: Ask a virtual area of 0x400000000 bytes 00:59:04.263 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:59:04.263 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:59:04.263 EAL: Ask a virtual area of 0x61000 bytes 00:59:04.263 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:59:04.263 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:59:04.263 EAL: Ask a virtual area of 0x400000000 bytes 00:59:04.263 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:59:04.263 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:59:04.263 EAL: Hugepages will be freed exactly as allocated. 00:59:04.263 EAL: No shared files mode enabled, IPC is disabled 00:59:04.263 EAL: No shared files mode enabled, IPC is disabled 00:59:04.263 EAL: TSC frequency is ~2490000 KHz 00:59:04.263 EAL: Main lcore 0 is ready (tid=7ff34181ca00;cpuset=[0]) 00:59:04.263 EAL: Trying to obtain current memory policy. 00:59:04.263 EAL: Setting policy MPOL_PREFERRED for socket 0 00:59:04.263 EAL: Restoring previous memory policy: 0 00:59:04.263 EAL: request: mp_malloc_sync 00:59:04.263 EAL: No shared files mode enabled, IPC is disabled 00:59:04.263 EAL: Heap on socket 0 was expanded by 2MB 00:59:04.263 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:59:04.263 EAL: No PCI address specified using 'addr=' in: bus=pci 00:59:04.263 EAL: Mem event callback 'spdk:(nil)' registered 00:59:04.263 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:59:04.263 00:59:04.263 00:59:04.263 CUnit - A unit testing framework for C - Version 2.1-3 00:59:04.263 http://cunit.sourceforge.net/ 00:59:04.263 00:59:04.263 00:59:04.263 Suite: components_suite 00:59:04.263 Test: vtophys_malloc_test ...passed 00:59:04.263 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:59:04.263 EAL: Setting policy MPOL_PREFERRED for socket 0 00:59:04.263 EAL: Restoring previous memory policy: 4 00:59:04.263 EAL: Calling mem event callback 'spdk:(nil)' 00:59:04.263 EAL: request: mp_malloc_sync 00:59:04.263 EAL: No shared files mode enabled, IPC is disabled 00:59:04.263 EAL: Heap on socket 0 was expanded by 4MB 00:59:04.263 EAL: Calling mem event callback 'spdk:(nil)' 00:59:04.263 EAL: request: mp_malloc_sync 00:59:04.263 EAL: No shared files mode enabled, IPC is disabled 00:59:04.263 EAL: Heap on socket 0 was shrunk by 4MB 00:59:04.263 EAL: Trying to obtain current memory policy. 00:59:04.263 EAL: Setting policy MPOL_PREFERRED for socket 0 00:59:04.263 EAL: Restoring previous memory policy: 4 00:59:04.263 EAL: Calling mem event callback 'spdk:(nil)' 00:59:04.263 EAL: request: mp_malloc_sync 00:59:04.263 EAL: No shared files mode enabled, IPC is disabled 00:59:04.263 EAL: Heap on socket 0 was expanded by 6MB 00:59:04.263 EAL: Calling mem event callback 'spdk:(nil)' 00:59:04.263 EAL: request: mp_malloc_sync 00:59:04.263 EAL: No shared files mode enabled, IPC is disabled 00:59:04.263 EAL: Heap on socket 0 was shrunk by 6MB 00:59:04.263 EAL: Trying to obtain current memory policy. 00:59:04.263 EAL: Setting policy MPOL_PREFERRED for socket 0 00:59:04.263 EAL: Restoring previous memory policy: 4 00:59:04.263 EAL: Calling mem event callback 'spdk:(nil)' 00:59:04.263 EAL: request: mp_malloc_sync 00:59:04.263 EAL: No shared files mode enabled, IPC is disabled 00:59:04.263 EAL: Heap on socket 0 was expanded by 10MB 00:59:04.263 EAL: Calling mem event callback 'spdk:(nil)' 00:59:04.263 EAL: request: mp_malloc_sync 00:59:04.263 EAL: No shared files mode enabled, IPC is disabled 00:59:04.263 EAL: Heap on socket 0 was shrunk by 10MB 00:59:04.263 EAL: Trying to obtain current memory policy. 00:59:04.263 EAL: Setting policy MPOL_PREFERRED for socket 0 00:59:04.263 EAL: Restoring previous memory policy: 4 00:59:04.263 EAL: Calling mem event callback 'spdk:(nil)' 00:59:04.263 EAL: request: mp_malloc_sync 00:59:04.263 EAL: No shared files mode enabled, IPC is disabled 00:59:04.263 EAL: Heap on socket 0 was expanded by 18MB 00:59:04.263 EAL: Calling mem event callback 'spdk:(nil)' 00:59:04.263 EAL: request: mp_malloc_sync 00:59:04.263 EAL: No shared files mode enabled, IPC is disabled 00:59:04.263 EAL: Heap on socket 0 was shrunk by 18MB 00:59:04.263 EAL: Trying to obtain current memory policy. 00:59:04.263 EAL: Setting policy MPOL_PREFERRED for socket 0 00:59:04.263 EAL: Restoring previous memory policy: 4 00:59:04.263 EAL: Calling mem event callback 'spdk:(nil)' 00:59:04.263 EAL: request: mp_malloc_sync 00:59:04.263 EAL: No shared files mode enabled, IPC is disabled 00:59:04.263 EAL: Heap on socket 0 was expanded by 34MB 00:59:04.263 EAL: Calling mem event callback 'spdk:(nil)' 00:59:04.263 EAL: request: mp_malloc_sync 00:59:04.263 EAL: No shared files mode enabled, IPC is disabled 00:59:04.263 EAL: Heap on socket 0 was shrunk by 34MB 00:59:04.263 EAL: Trying to obtain current memory policy. 
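Earlier in this vtophys run EAL reported "Module /sys/module/vfio not found" and "VFIO modules not loaded, skipping VFIO support", which is why IOVA mode 'PA' is selected and why the NVMe controllers in this job are bound through uio_pci_generic rather than vfio-pci. A quick way to confirm that state on a host before running DPDK/SPDK applications is sketched below; this is a generic check, not something autotest itself runs:

    # Illustrative only: report whether the vfio-pci module is loaded or even available.
    if [[ -e /sys/module/vfio_pci ]]; then
        echo "vfio-pci is loaded; EAL can probe VFIO support"
    elif modinfo vfio-pci >/dev/null 2>&1; then
        echo "vfio-pci exists but is not loaded; try: sudo modprobe vfio-pci"
    else
        echo "no vfio-pci module; EAL will skip VFIO, as in the log above"
    fi

Without VFIO the malloc tests here still pass, since the heap expansions and shrinks being logged only exercise EAL heap allocation, not device DMA mappings.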
00:59:04.263 EAL: Setting policy MPOL_PREFERRED for socket 0 00:59:04.263 EAL: Restoring previous memory policy: 4 00:59:04.263 EAL: Calling mem event callback 'spdk:(nil)' 00:59:04.263 EAL: request: mp_malloc_sync 00:59:04.263 EAL: No shared files mode enabled, IPC is disabled 00:59:04.263 EAL: Heap on socket 0 was expanded by 66MB 00:59:04.263 EAL: Calling mem event callback 'spdk:(nil)' 00:59:04.263 EAL: request: mp_malloc_sync 00:59:04.263 EAL: No shared files mode enabled, IPC is disabled 00:59:04.263 EAL: Heap on socket 0 was shrunk by 66MB 00:59:04.263 EAL: Trying to obtain current memory policy. 00:59:04.263 EAL: Setting policy MPOL_PREFERRED for socket 0 00:59:04.522 EAL: Restoring previous memory policy: 4 00:59:04.522 EAL: Calling mem event callback 'spdk:(nil)' 00:59:04.522 EAL: request: mp_malloc_sync 00:59:04.522 EAL: No shared files mode enabled, IPC is disabled 00:59:04.522 EAL: Heap on socket 0 was expanded by 130MB 00:59:04.522 EAL: Calling mem event callback 'spdk:(nil)' 00:59:04.522 EAL: request: mp_malloc_sync 00:59:04.522 EAL: No shared files mode enabled, IPC is disabled 00:59:04.522 EAL: Heap on socket 0 was shrunk by 130MB 00:59:04.522 EAL: Trying to obtain current memory policy. 00:59:04.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:59:04.522 EAL: Restoring previous memory policy: 4 00:59:04.522 EAL: Calling mem event callback 'spdk:(nil)' 00:59:04.522 EAL: request: mp_malloc_sync 00:59:04.522 EAL: No shared files mode enabled, IPC is disabled 00:59:04.522 EAL: Heap on socket 0 was expanded by 258MB 00:59:04.522 EAL: Calling mem event callback 'spdk:(nil)' 00:59:04.522 EAL: request: mp_malloc_sync 00:59:04.522 EAL: No shared files mode enabled, IPC is disabled 00:59:04.522 EAL: Heap on socket 0 was shrunk by 258MB 00:59:04.522 EAL: Trying to obtain current memory policy. 00:59:04.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:59:04.780 EAL: Restoring previous memory policy: 4 00:59:04.780 EAL: Calling mem event callback 'spdk:(nil)' 00:59:04.781 EAL: request: mp_malloc_sync 00:59:04.781 EAL: No shared files mode enabled, IPC is disabled 00:59:04.781 EAL: Heap on socket 0 was expanded by 514MB 00:59:04.781 EAL: Calling mem event callback 'spdk:(nil)' 00:59:04.781 EAL: request: mp_malloc_sync 00:59:04.781 EAL: No shared files mode enabled, IPC is disabled 00:59:04.781 EAL: Heap on socket 0 was shrunk by 514MB 00:59:04.781 EAL: Trying to obtain current memory policy. 
00:59:04.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:59:05.047 EAL: Restoring previous memory policy: 4 00:59:05.047 EAL: Calling mem event callback 'spdk:(nil)' 00:59:05.047 EAL: request: mp_malloc_sync 00:59:05.047 EAL: No shared files mode enabled, IPC is disabled 00:59:05.047 EAL: Heap on socket 0 was expanded by 1026MB 00:59:05.307 EAL: Calling mem event callback 'spdk:(nil)' 00:59:05.307 passed 00:59:05.307 00:59:05.307 Run Summary: Type Total Ran Passed Failed Inactive 00:59:05.307 suites 1 1 n/a 0 0 00:59:05.307 tests 2 2 2 0 0 00:59:05.307 asserts 5575 5575 5575 0 n/a 00:59:05.307 00:59:05.307 Elapsed time = 1.008 seconds 00:59:05.307 EAL: request: mp_malloc_sync 00:59:05.307 EAL: No shared files mode enabled, IPC is disabled 00:59:05.307 EAL: Heap on socket 0 was shrunk by 1026MB 00:59:05.307 EAL: Calling mem event callback 'spdk:(nil)' 00:59:05.307 EAL: request: mp_malloc_sync 00:59:05.307 EAL: No shared files mode enabled, IPC is disabled 00:59:05.307 EAL: Heap on socket 0 was shrunk by 2MB 00:59:05.307 EAL: No shared files mode enabled, IPC is disabled 00:59:05.307 EAL: No shared files mode enabled, IPC is disabled 00:59:05.307 EAL: No shared files mode enabled, IPC is disabled 00:59:05.307 00:59:05.307 real 0m1.222s 00:59:05.307 user 0m0.662s 00:59:05.307 sys 0m0.430s 00:59:05.307 05:57:59 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:05.307 ************************************ 00:59:05.307 END TEST env_vtophys 00:59:05.307 05:57:59 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:59:05.307 ************************************ 00:59:05.307 05:57:59 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:59:05.307 05:57:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:05.307 05:57:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:05.307 05:57:59 env -- common/autotest_common.sh@10 -- # set +x 00:59:05.567 ************************************ 00:59:05.567 START TEST env_pci 00:59:05.567 ************************************ 00:59:05.567 05:57:59 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:59:05.567 00:59:05.567 00:59:05.567 CUnit - A unit testing framework for C - Version 2.1-3 00:59:05.567 http://cunit.sourceforge.net/ 00:59:05.567 00:59:05.567 00:59:05.567 Suite: pci 00:59:05.567 Test: pci_hook ...[2024-12-09 05:57:59.914475] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56497 has claimed it 00:59:05.567 passed 00:59:05.567 00:59:05.567 Run Summary: Type Total Ran Passed Failed Inactive 00:59:05.567 suites 1 1 n/a 0 0 00:59:05.567 tests 1 1 1 0 0 00:59:05.567 asserts 25 25 25 0 n/a 00:59:05.567 00:59:05.567 Elapsed time = 0.003 seconds 00:59:05.567 EAL: Cannot find device (10000:00:01.0) 00:59:05.567 EAL: Failed to attach device on primary process 00:59:05.567 00:59:05.567 real 0m0.029s 00:59:05.567 user 0m0.014s 00:59:05.567 sys 0m0.015s 00:59:05.567 05:57:59 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:05.567 05:57:59 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:59:05.567 ************************************ 00:59:05.567 END TEST env_pci 00:59:05.567 ************************************ 00:59:05.567 05:57:59 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:59:05.567 05:57:59 env -- env/env.sh@15 -- # uname 00:59:05.567 05:57:59 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:59:05.567 05:57:59 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:59:05.567 05:57:59 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:59:05.567 05:57:59 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:59:05.567 05:57:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:05.567 05:57:59 env -- common/autotest_common.sh@10 -- # set +x 00:59:05.567 ************************************ 00:59:05.567 START TEST env_dpdk_post_init 00:59:05.567 ************************************ 00:59:05.567 05:58:00 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:59:05.567 EAL: Detected CPU lcores: 10 00:59:05.567 EAL: Detected NUMA nodes: 1 00:59:05.567 EAL: Detected shared linkage of DPDK 00:59:05.567 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:59:05.567 EAL: Selected IOVA mode 'PA' 00:59:05.828 TELEMETRY: No legacy callbacks, legacy socket not created 00:59:05.828 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:59:05.828 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:59:05.828 Starting DPDK initialization... 00:59:05.828 Starting SPDK post initialization... 00:59:05.828 SPDK NVMe probe 00:59:05.828 Attaching to 0000:00:10.0 00:59:05.828 Attaching to 0000:00:11.0 00:59:05.828 Attached to 0000:00:10.0 00:59:05.828 Attached to 0000:00:11.0 00:59:05.828 Cleaning up... 00:59:05.828 00:59:05.828 real 0m0.205s 00:59:05.828 user 0m0.057s 00:59:05.828 sys 0m0.048s 00:59:05.828 05:58:00 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:05.828 05:58:00 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:59:05.828 ************************************ 00:59:05.828 END TEST env_dpdk_post_init 00:59:05.828 ************************************ 00:59:05.828 05:58:00 env -- env/env.sh@26 -- # uname 00:59:05.828 05:58:00 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:59:05.828 05:58:00 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:59:05.828 05:58:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:05.828 05:58:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:05.828 05:58:00 env -- common/autotest_common.sh@10 -- # set +x 00:59:05.828 ************************************ 00:59:05.828 START TEST env_mem_callbacks 00:59:05.828 ************************************ 00:59:05.828 05:58:00 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:59:05.828 EAL: Detected CPU lcores: 10 00:59:05.828 EAL: Detected NUMA nodes: 1 00:59:05.828 EAL: Detected shared linkage of DPDK 00:59:05.828 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:59:05.828 EAL: Selected IOVA mode 'PA' 00:59:06.088 TELEMETRY: No legacy callbacks, legacy socket not created 00:59:06.089 00:59:06.089 00:59:06.089 CUnit - A unit testing framework for C - Version 2.1-3 00:59:06.089 http://cunit.sourceforge.net/ 00:59:06.089 00:59:06.089 00:59:06.089 Suite: memory 00:59:06.089 Test: test ... 
00:59:06.089 register 0x200000200000 2097152 00:59:06.089 malloc 3145728 00:59:06.089 register 0x200000400000 4194304 00:59:06.089 buf 0x200000500000 len 3145728 PASSED 00:59:06.089 malloc 64 00:59:06.089 buf 0x2000004fff40 len 64 PASSED 00:59:06.089 malloc 4194304 00:59:06.089 register 0x200000800000 6291456 00:59:06.089 buf 0x200000a00000 len 4194304 PASSED 00:59:06.089 free 0x200000500000 3145728 00:59:06.089 free 0x2000004fff40 64 00:59:06.089 unregister 0x200000400000 4194304 PASSED 00:59:06.089 free 0x200000a00000 4194304 00:59:06.089 unregister 0x200000800000 6291456 PASSED 00:59:06.089 malloc 8388608 00:59:06.089 register 0x200000400000 10485760 00:59:06.089 buf 0x200000600000 len 8388608 PASSED 00:59:06.089 free 0x200000600000 8388608 00:59:06.089 unregister 0x200000400000 10485760 PASSED 00:59:06.089 passed 00:59:06.089 00:59:06.089 Run Summary: Type Total Ran Passed Failed Inactive 00:59:06.089 suites 1 1 n/a 0 0 00:59:06.089 tests 1 1 1 0 0 00:59:06.089 asserts 15 15 15 0 n/a 00:59:06.089 00:59:06.089 Elapsed time = 0.010 seconds 00:59:06.089 00:59:06.089 real 0m0.156s 00:59:06.089 user 0m0.020s 00:59:06.089 sys 0m0.035s 00:59:06.089 05:58:00 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:06.089 ************************************ 00:59:06.089 END TEST env_mem_callbacks 00:59:06.089 ************************************ 00:59:06.089 05:58:00 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:59:06.089 00:59:06.089 real 0m2.402s 00:59:06.089 user 0m1.123s 00:59:06.089 sys 0m0.945s 00:59:06.089 05:58:00 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:06.089 ************************************ 00:59:06.089 END TEST env 00:59:06.089 ************************************ 00:59:06.089 05:58:00 env -- common/autotest_common.sh@10 -- # set +x 00:59:06.089 05:58:00 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:59:06.089 05:58:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:06.089 05:58:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:06.089 05:58:00 -- common/autotest_common.sh@10 -- # set +x 00:59:06.089 ************************************ 00:59:06.089 START TEST rpc 00:59:06.089 ************************************ 00:59:06.089 05:58:00 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:59:06.349 * Looking for test storage... 
00:59:06.349 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:59:06.349 05:58:00 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:59:06.349 05:58:00 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:59:06.349 05:58:00 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:59:06.349 05:58:00 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:59:06.349 05:58:00 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:59:06.349 05:58:00 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:59:06.349 05:58:00 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:59:06.349 05:58:00 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:59:06.349 05:58:00 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:59:06.349 05:58:00 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:59:06.349 05:58:00 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:59:06.349 05:58:00 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:59:06.349 05:58:00 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:59:06.349 05:58:00 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:59:06.349 05:58:00 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:59:06.349 05:58:00 rpc -- scripts/common.sh@344 -- # case "$op" in 00:59:06.349 05:58:00 rpc -- scripts/common.sh@345 -- # : 1 00:59:06.349 05:58:00 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:59:06.349 05:58:00 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:59:06.349 05:58:00 rpc -- scripts/common.sh@365 -- # decimal 1 00:59:06.349 05:58:00 rpc -- scripts/common.sh@353 -- # local d=1 00:59:06.349 05:58:00 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:59:06.349 05:58:00 rpc -- scripts/common.sh@355 -- # echo 1 00:59:06.349 05:58:00 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:59:06.349 05:58:00 rpc -- scripts/common.sh@366 -- # decimal 2 00:59:06.349 05:58:00 rpc -- scripts/common.sh@353 -- # local d=2 00:59:06.350 05:58:00 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:59:06.350 05:58:00 rpc -- scripts/common.sh@355 -- # echo 2 00:59:06.350 05:58:00 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:59:06.350 05:58:00 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:59:06.350 05:58:00 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:59:06.350 05:58:00 rpc -- scripts/common.sh@368 -- # return 0 00:59:06.350 05:58:00 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:59:06.350 05:58:00 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:59:06.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:06.350 --rc genhtml_branch_coverage=1 00:59:06.350 --rc genhtml_function_coverage=1 00:59:06.350 --rc genhtml_legend=1 00:59:06.350 --rc geninfo_all_blocks=1 00:59:06.350 --rc geninfo_unexecuted_blocks=1 00:59:06.350 00:59:06.350 ' 00:59:06.350 05:58:00 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:59:06.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:06.350 --rc genhtml_branch_coverage=1 00:59:06.350 --rc genhtml_function_coverage=1 00:59:06.350 --rc genhtml_legend=1 00:59:06.350 --rc geninfo_all_blocks=1 00:59:06.350 --rc geninfo_unexecuted_blocks=1 00:59:06.350 00:59:06.350 ' 00:59:06.350 05:58:00 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:59:06.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:06.350 --rc genhtml_branch_coverage=1 00:59:06.350 --rc genhtml_function_coverage=1 00:59:06.350 --rc 
genhtml_legend=1 00:59:06.350 --rc geninfo_all_blocks=1 00:59:06.350 --rc geninfo_unexecuted_blocks=1 00:59:06.350 00:59:06.350 ' 00:59:06.350 05:58:00 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:59:06.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:06.350 --rc genhtml_branch_coverage=1 00:59:06.350 --rc genhtml_function_coverage=1 00:59:06.350 --rc genhtml_legend=1 00:59:06.350 --rc geninfo_all_blocks=1 00:59:06.350 --rc geninfo_unexecuted_blocks=1 00:59:06.350 00:59:06.350 ' 00:59:06.350 05:58:00 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56620 00:59:06.350 05:58:00 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:59:06.350 05:58:00 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:59:06.350 05:58:00 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56620 00:59:06.350 05:58:00 rpc -- common/autotest_common.sh@835 -- # '[' -z 56620 ']' 00:59:06.350 05:58:00 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:59:06.350 05:58:00 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:59:06.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:59:06.350 05:58:00 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:59:06.350 05:58:00 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:59:06.350 05:58:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:59:06.350 [2024-12-09 05:58:00.881979] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:59:06.350 [2024-12-09 05:58:00.882422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56620 ] 00:59:06.610 [2024-12-09 05:58:01.030782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:59:06.610 [2024-12-09 05:58:01.069932] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:59:06.610 [2024-12-09 05:58:01.069973] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56620' to capture a snapshot of events at runtime. 00:59:06.610 [2024-12-09 05:58:01.069997] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:59:06.610 [2024-12-09 05:58:01.070006] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:59:06.610 [2024-12-09 05:58:01.070012] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56620 for offline analysis/debug. 
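For reference, the trace hookup the notices above describe can be redone by hand outside the harness. This is only a sketch: the spdk_tgt invocation and the 'spdk_trace -s spdk_tgt -p 56620' command are taken verbatim from the log above, while the build/bin location of spdk_trace is an assumption about the build tree.

  # Start the target with the bdev tracepoint group enabled, as rpc.sh@64 does above.
  ./build/bin/spdk_tgt -e bdev &
  # Snapshot the bdev tracepoints once the target is up (pid as reported in the notice).
  ./build/bin/spdk_trace -s spdk_tgt -p 56620
  # Or keep the shared-memory trace file for offline analysis, as the last notice suggests.
  cp /dev/shm/spdk_tgt_trace.pid56620 /tmp/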
00:59:06.610 [2024-12-09 05:58:01.070307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:59:06.610 [2024-12-09 05:58:01.125085] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:59:07.177 05:58:01 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:59:07.177 05:58:01 rpc -- common/autotest_common.sh@868 -- # return 0 00:59:07.177 05:58:01 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:59:07.177 05:58:01 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:59:07.177 05:58:01 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:59:07.177 05:58:01 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:59:07.178 05:58:01 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:07.178 05:58:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:07.178 05:58:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:59:07.178 ************************************ 00:59:07.178 START TEST rpc_integrity 00:59:07.178 ************************************ 00:59:07.178 05:58:01 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:59:07.178 05:58:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:59:07.178 05:58:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:07.178 05:58:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:59:07.178 05:58:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:07.178 05:58:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:59:07.178 05:58:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:59:07.437 05:58:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:59:07.437 05:58:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:59:07.437 05:58:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:07.437 05:58:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:59:07.437 05:58:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:07.437 05:58:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:59:07.437 05:58:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:59:07.437 05:58:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:07.437 05:58:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:59:07.437 05:58:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:07.437 05:58:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:59:07.437 { 00:59:07.437 "name": "Malloc0", 00:59:07.437 "aliases": [ 00:59:07.437 "461adf26-c14f-44c2-b9bc-1f2ee1351644" 00:59:07.437 ], 00:59:07.437 "product_name": "Malloc disk", 00:59:07.437 "block_size": 512, 00:59:07.437 "num_blocks": 16384, 00:59:07.437 "uuid": "461adf26-c14f-44c2-b9bc-1f2ee1351644", 00:59:07.437 "assigned_rate_limits": { 00:59:07.437 "rw_ios_per_sec": 0, 00:59:07.437 "rw_mbytes_per_sec": 0, 00:59:07.437 "r_mbytes_per_sec": 0, 00:59:07.437 "w_mbytes_per_sec": 0 00:59:07.437 }, 00:59:07.437 "claimed": false, 00:59:07.437 "zoned": false, 00:59:07.437 
"supported_io_types": { 00:59:07.437 "read": true, 00:59:07.437 "write": true, 00:59:07.437 "unmap": true, 00:59:07.437 "flush": true, 00:59:07.437 "reset": true, 00:59:07.437 "nvme_admin": false, 00:59:07.437 "nvme_io": false, 00:59:07.437 "nvme_io_md": false, 00:59:07.437 "write_zeroes": true, 00:59:07.437 "zcopy": true, 00:59:07.437 "get_zone_info": false, 00:59:07.437 "zone_management": false, 00:59:07.437 "zone_append": false, 00:59:07.437 "compare": false, 00:59:07.437 "compare_and_write": false, 00:59:07.437 "abort": true, 00:59:07.437 "seek_hole": false, 00:59:07.437 "seek_data": false, 00:59:07.437 "copy": true, 00:59:07.437 "nvme_iov_md": false 00:59:07.437 }, 00:59:07.437 "memory_domains": [ 00:59:07.437 { 00:59:07.437 "dma_device_id": "system", 00:59:07.437 "dma_device_type": 1 00:59:07.437 }, 00:59:07.437 { 00:59:07.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:59:07.437 "dma_device_type": 2 00:59:07.437 } 00:59:07.437 ], 00:59:07.437 "driver_specific": {} 00:59:07.437 } 00:59:07.437 ]' 00:59:07.437 05:58:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:59:07.437 05:58:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:59:07.437 05:58:01 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:59:07.437 05:58:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:07.437 05:58:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:59:07.437 [2024-12-09 05:58:01.877807] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:59:07.437 [2024-12-09 05:58:01.877843] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:59:07.437 [2024-12-09 05:58:01.877872] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x137fcb0 00:59:07.437 [2024-12-09 05:58:01.877880] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:59:07.437 [2024-12-09 05:58:01.879049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:59:07.437 [2024-12-09 05:58:01.879073] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:59:07.437 Passthru0 00:59:07.437 05:58:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:07.437 05:58:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:59:07.437 05:58:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:07.437 05:58:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:59:07.437 05:58:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:07.437 05:58:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:59:07.437 { 00:59:07.437 "name": "Malloc0", 00:59:07.437 "aliases": [ 00:59:07.437 "461adf26-c14f-44c2-b9bc-1f2ee1351644" 00:59:07.437 ], 00:59:07.437 "product_name": "Malloc disk", 00:59:07.437 "block_size": 512, 00:59:07.437 "num_blocks": 16384, 00:59:07.437 "uuid": "461adf26-c14f-44c2-b9bc-1f2ee1351644", 00:59:07.438 "assigned_rate_limits": { 00:59:07.438 "rw_ios_per_sec": 0, 00:59:07.438 "rw_mbytes_per_sec": 0, 00:59:07.438 "r_mbytes_per_sec": 0, 00:59:07.438 "w_mbytes_per_sec": 0 00:59:07.438 }, 00:59:07.438 "claimed": true, 00:59:07.438 "claim_type": "exclusive_write", 00:59:07.438 "zoned": false, 00:59:07.438 "supported_io_types": { 00:59:07.438 "read": true, 00:59:07.438 "write": true, 00:59:07.438 "unmap": true, 00:59:07.438 "flush": true, 00:59:07.438 "reset": true, 00:59:07.438 "nvme_admin": false, 
00:59:07.438 "nvme_io": false, 00:59:07.438 "nvme_io_md": false, 00:59:07.438 "write_zeroes": true, 00:59:07.438 "zcopy": true, 00:59:07.438 "get_zone_info": false, 00:59:07.438 "zone_management": false, 00:59:07.438 "zone_append": false, 00:59:07.438 "compare": false, 00:59:07.438 "compare_and_write": false, 00:59:07.438 "abort": true, 00:59:07.438 "seek_hole": false, 00:59:07.438 "seek_data": false, 00:59:07.438 "copy": true, 00:59:07.438 "nvme_iov_md": false 00:59:07.438 }, 00:59:07.438 "memory_domains": [ 00:59:07.438 { 00:59:07.438 "dma_device_id": "system", 00:59:07.438 "dma_device_type": 1 00:59:07.438 }, 00:59:07.438 { 00:59:07.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:59:07.438 "dma_device_type": 2 00:59:07.438 } 00:59:07.438 ], 00:59:07.438 "driver_specific": {} 00:59:07.438 }, 00:59:07.438 { 00:59:07.438 "name": "Passthru0", 00:59:07.438 "aliases": [ 00:59:07.438 "4d564b39-4b72-51cb-a953-fe5791853292" 00:59:07.438 ], 00:59:07.438 "product_name": "passthru", 00:59:07.438 "block_size": 512, 00:59:07.438 "num_blocks": 16384, 00:59:07.438 "uuid": "4d564b39-4b72-51cb-a953-fe5791853292", 00:59:07.438 "assigned_rate_limits": { 00:59:07.438 "rw_ios_per_sec": 0, 00:59:07.438 "rw_mbytes_per_sec": 0, 00:59:07.438 "r_mbytes_per_sec": 0, 00:59:07.438 "w_mbytes_per_sec": 0 00:59:07.438 }, 00:59:07.438 "claimed": false, 00:59:07.438 "zoned": false, 00:59:07.438 "supported_io_types": { 00:59:07.438 "read": true, 00:59:07.438 "write": true, 00:59:07.438 "unmap": true, 00:59:07.438 "flush": true, 00:59:07.438 "reset": true, 00:59:07.438 "nvme_admin": false, 00:59:07.438 "nvme_io": false, 00:59:07.438 "nvme_io_md": false, 00:59:07.438 "write_zeroes": true, 00:59:07.438 "zcopy": true, 00:59:07.438 "get_zone_info": false, 00:59:07.438 "zone_management": false, 00:59:07.438 "zone_append": false, 00:59:07.438 "compare": false, 00:59:07.438 "compare_and_write": false, 00:59:07.438 "abort": true, 00:59:07.438 "seek_hole": false, 00:59:07.438 "seek_data": false, 00:59:07.438 "copy": true, 00:59:07.438 "nvme_iov_md": false 00:59:07.438 }, 00:59:07.438 "memory_domains": [ 00:59:07.438 { 00:59:07.438 "dma_device_id": "system", 00:59:07.438 "dma_device_type": 1 00:59:07.438 }, 00:59:07.438 { 00:59:07.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:59:07.438 "dma_device_type": 2 00:59:07.438 } 00:59:07.438 ], 00:59:07.438 "driver_specific": { 00:59:07.438 "passthru": { 00:59:07.438 "name": "Passthru0", 00:59:07.438 "base_bdev_name": "Malloc0" 00:59:07.438 } 00:59:07.438 } 00:59:07.438 } 00:59:07.438 ]' 00:59:07.438 05:58:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:59:07.438 05:58:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:59:07.438 05:58:01 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:59:07.438 05:58:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:07.438 05:58:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:59:07.438 05:58:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:07.438 05:58:01 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:59:07.438 05:58:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:07.438 05:58:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:59:07.438 05:58:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:07.438 05:58:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:59:07.438 05:58:01 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:59:07.438 05:58:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:59:07.438 05:58:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:07.438 05:58:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:59:07.438 05:58:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:59:07.698 05:58:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:59:07.698 00:59:07.698 real 0m0.322s 00:59:07.698 user 0m0.194s 00:59:07.698 sys 0m0.063s 00:59:07.698 ************************************ 00:59:07.698 END TEST rpc_integrity 00:59:07.698 ************************************ 00:59:07.698 05:58:02 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:07.698 05:58:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:59:07.698 05:58:02 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:59:07.698 05:58:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:07.698 05:58:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:07.698 05:58:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:59:07.698 ************************************ 00:59:07.698 START TEST rpc_plugins 00:59:07.698 ************************************ 00:59:07.698 05:58:02 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:59:07.698 05:58:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:59:07.698 05:58:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:07.698 05:58:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:59:07.698 05:58:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:07.698 05:58:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:59:07.698 05:58:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:59:07.698 05:58:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:07.698 05:58:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:59:07.698 05:58:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:07.698 05:58:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:59:07.698 { 00:59:07.698 "name": "Malloc1", 00:59:07.698 "aliases": [ 00:59:07.698 "30495c5d-e066-4471-853d-1243ffdf2f8b" 00:59:07.698 ], 00:59:07.698 "product_name": "Malloc disk", 00:59:07.698 "block_size": 4096, 00:59:07.698 "num_blocks": 256, 00:59:07.698 "uuid": "30495c5d-e066-4471-853d-1243ffdf2f8b", 00:59:07.698 "assigned_rate_limits": { 00:59:07.698 "rw_ios_per_sec": 0, 00:59:07.698 "rw_mbytes_per_sec": 0, 00:59:07.698 "r_mbytes_per_sec": 0, 00:59:07.698 "w_mbytes_per_sec": 0 00:59:07.698 }, 00:59:07.698 "claimed": false, 00:59:07.698 "zoned": false, 00:59:07.698 "supported_io_types": { 00:59:07.698 "read": true, 00:59:07.698 "write": true, 00:59:07.698 "unmap": true, 00:59:07.698 "flush": true, 00:59:07.698 "reset": true, 00:59:07.698 "nvme_admin": false, 00:59:07.698 "nvme_io": false, 00:59:07.698 "nvme_io_md": false, 00:59:07.698 "write_zeroes": true, 00:59:07.698 "zcopy": true, 00:59:07.698 "get_zone_info": false, 00:59:07.698 "zone_management": false, 00:59:07.698 "zone_append": false, 00:59:07.698 "compare": false, 00:59:07.698 "compare_and_write": false, 00:59:07.698 "abort": true, 00:59:07.698 "seek_hole": false, 00:59:07.698 "seek_data": false, 00:59:07.698 "copy": true, 00:59:07.698 "nvme_iov_md": false 00:59:07.698 }, 00:59:07.698 "memory_domains": [ 00:59:07.698 { 
00:59:07.698 "dma_device_id": "system", 00:59:07.698 "dma_device_type": 1 00:59:07.698 }, 00:59:07.698 { 00:59:07.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:59:07.698 "dma_device_type": 2 00:59:07.698 } 00:59:07.698 ], 00:59:07.698 "driver_specific": {} 00:59:07.698 } 00:59:07.698 ]' 00:59:07.698 05:58:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:59:07.698 05:58:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:59:07.698 05:58:02 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:59:07.698 05:58:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:07.698 05:58:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:59:07.698 05:58:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:07.698 05:58:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:59:07.698 05:58:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:07.698 05:58:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:59:07.698 05:58:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:07.698 05:58:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:59:07.698 05:58:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:59:07.698 05:58:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:59:07.698 00:59:07.698 real 0m0.147s 00:59:07.698 user 0m0.085s 00:59:07.698 sys 0m0.026s 00:59:07.698 ************************************ 00:59:07.698 END TEST rpc_plugins 00:59:07.698 ************************************ 00:59:07.698 05:58:02 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:07.698 05:58:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:59:07.958 05:58:02 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:59:07.958 05:58:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:07.958 05:58:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:07.958 05:58:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:59:07.958 ************************************ 00:59:07.958 START TEST rpc_trace_cmd_test 00:59:07.958 ************************************ 00:59:07.958 05:58:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:59:07.958 05:58:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:59:07.958 05:58:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:59:07.958 05:58:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:07.958 05:58:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:59:07.958 05:58:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:07.958 05:58:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:59:07.958 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56620", 00:59:07.958 "tpoint_group_mask": "0x8", 00:59:07.958 "iscsi_conn": { 00:59:07.958 "mask": "0x2", 00:59:07.958 "tpoint_mask": "0x0" 00:59:07.958 }, 00:59:07.958 "scsi": { 00:59:07.958 "mask": "0x4", 00:59:07.958 "tpoint_mask": "0x0" 00:59:07.958 }, 00:59:07.958 "bdev": { 00:59:07.958 "mask": "0x8", 00:59:07.958 "tpoint_mask": "0xffffffffffffffff" 00:59:07.958 }, 00:59:07.958 "nvmf_rdma": { 00:59:07.958 "mask": "0x10", 00:59:07.958 "tpoint_mask": "0x0" 00:59:07.958 }, 00:59:07.958 "nvmf_tcp": { 00:59:07.958 "mask": "0x20", 00:59:07.958 "tpoint_mask": "0x0" 00:59:07.958 }, 00:59:07.958 "ftl": { 00:59:07.958 
"mask": "0x40", 00:59:07.958 "tpoint_mask": "0x0" 00:59:07.958 }, 00:59:07.958 "blobfs": { 00:59:07.958 "mask": "0x80", 00:59:07.958 "tpoint_mask": "0x0" 00:59:07.958 }, 00:59:07.958 "dsa": { 00:59:07.958 "mask": "0x200", 00:59:07.958 "tpoint_mask": "0x0" 00:59:07.958 }, 00:59:07.958 "thread": { 00:59:07.958 "mask": "0x400", 00:59:07.958 "tpoint_mask": "0x0" 00:59:07.958 }, 00:59:07.958 "nvme_pcie": { 00:59:07.958 "mask": "0x800", 00:59:07.958 "tpoint_mask": "0x0" 00:59:07.958 }, 00:59:07.958 "iaa": { 00:59:07.958 "mask": "0x1000", 00:59:07.958 "tpoint_mask": "0x0" 00:59:07.958 }, 00:59:07.958 "nvme_tcp": { 00:59:07.958 "mask": "0x2000", 00:59:07.958 "tpoint_mask": "0x0" 00:59:07.958 }, 00:59:07.958 "bdev_nvme": { 00:59:07.958 "mask": "0x4000", 00:59:07.958 "tpoint_mask": "0x0" 00:59:07.958 }, 00:59:07.958 "sock": { 00:59:07.958 "mask": "0x8000", 00:59:07.958 "tpoint_mask": "0x0" 00:59:07.958 }, 00:59:07.958 "blob": { 00:59:07.958 "mask": "0x10000", 00:59:07.958 "tpoint_mask": "0x0" 00:59:07.958 }, 00:59:07.958 "bdev_raid": { 00:59:07.958 "mask": "0x20000", 00:59:07.958 "tpoint_mask": "0x0" 00:59:07.959 }, 00:59:07.959 "scheduler": { 00:59:07.959 "mask": "0x40000", 00:59:07.959 "tpoint_mask": "0x0" 00:59:07.959 } 00:59:07.959 }' 00:59:07.959 05:58:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:59:07.959 05:58:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:59:07.959 05:58:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:59:07.959 05:58:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:59:07.959 05:58:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:59:07.959 05:58:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:59:07.959 05:58:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:59:07.959 05:58:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:59:07.959 05:58:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:59:08.219 05:58:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:59:08.219 00:59:08.219 real 0m0.227s 00:59:08.219 user 0m0.183s 00:59:08.219 sys 0m0.034s 00:59:08.219 ************************************ 00:59:08.219 END TEST rpc_trace_cmd_test 00:59:08.219 ************************************ 00:59:08.219 05:58:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:08.219 05:58:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:59:08.219 05:58:02 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:59:08.219 05:58:02 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:59:08.219 05:58:02 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:59:08.219 05:58:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:08.219 05:58:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:08.219 05:58:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:59:08.219 ************************************ 00:59:08.219 START TEST rpc_daemon_integrity 00:59:08.219 ************************************ 00:59:08.219 05:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:59:08.219 05:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:59:08.219 05:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:08.219 05:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:59:08.219 
05:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:08.219 05:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:59:08.219 05:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:59:08.219 05:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:59:08.219 05:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:59:08.219 05:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:08.219 05:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:59:08.219 05:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:08.219 05:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:59:08.219 05:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:59:08.219 05:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:08.219 05:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:59:08.219 05:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:08.219 05:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:59:08.219 { 00:59:08.219 "name": "Malloc2", 00:59:08.219 "aliases": [ 00:59:08.219 "280f3d7e-6378-4da3-b4b9-3ff43fcae482" 00:59:08.219 ], 00:59:08.219 "product_name": "Malloc disk", 00:59:08.219 "block_size": 512, 00:59:08.219 "num_blocks": 16384, 00:59:08.219 "uuid": "280f3d7e-6378-4da3-b4b9-3ff43fcae482", 00:59:08.219 "assigned_rate_limits": { 00:59:08.219 "rw_ios_per_sec": 0, 00:59:08.219 "rw_mbytes_per_sec": 0, 00:59:08.219 "r_mbytes_per_sec": 0, 00:59:08.219 "w_mbytes_per_sec": 0 00:59:08.219 }, 00:59:08.219 "claimed": false, 00:59:08.219 "zoned": false, 00:59:08.219 "supported_io_types": { 00:59:08.219 "read": true, 00:59:08.219 "write": true, 00:59:08.219 "unmap": true, 00:59:08.219 "flush": true, 00:59:08.219 "reset": true, 00:59:08.219 "nvme_admin": false, 00:59:08.219 "nvme_io": false, 00:59:08.219 "nvme_io_md": false, 00:59:08.219 "write_zeroes": true, 00:59:08.219 "zcopy": true, 00:59:08.219 "get_zone_info": false, 00:59:08.219 "zone_management": false, 00:59:08.219 "zone_append": false, 00:59:08.219 "compare": false, 00:59:08.219 "compare_and_write": false, 00:59:08.219 "abort": true, 00:59:08.219 "seek_hole": false, 00:59:08.219 "seek_data": false, 00:59:08.219 "copy": true, 00:59:08.219 "nvme_iov_md": false 00:59:08.219 }, 00:59:08.219 "memory_domains": [ 00:59:08.219 { 00:59:08.219 "dma_device_id": "system", 00:59:08.219 "dma_device_type": 1 00:59:08.219 }, 00:59:08.219 { 00:59:08.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:59:08.219 "dma_device_type": 2 00:59:08.219 } 00:59:08.219 ], 00:59:08.219 "driver_specific": {} 00:59:08.219 } 00:59:08.219 ]' 00:59:08.219 05:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:59:08.219 05:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:59:08.219 05:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:59:08.219 05:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:08.219 05:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:59:08.219 [2024-12-09 05:58:02.780991] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:59:08.219 [2024-12-09 05:58:02.781024] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:59:08.219 [2024-12-09 05:58:02.781036] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13e3270 00:59:08.219 [2024-12-09 05:58:02.781044] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:59:08.219 [2024-12-09 05:58:02.782012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:59:08.219 [2024-12-09 05:58:02.782031] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:59:08.219 Passthru0 00:59:08.219 05:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:08.219 05:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:59:08.219 05:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:08.219 05:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:59:08.479 05:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:08.479 05:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:59:08.479 { 00:59:08.479 "name": "Malloc2", 00:59:08.479 "aliases": [ 00:59:08.479 "280f3d7e-6378-4da3-b4b9-3ff43fcae482" 00:59:08.479 ], 00:59:08.479 "product_name": "Malloc disk", 00:59:08.479 "block_size": 512, 00:59:08.479 "num_blocks": 16384, 00:59:08.479 "uuid": "280f3d7e-6378-4da3-b4b9-3ff43fcae482", 00:59:08.479 "assigned_rate_limits": { 00:59:08.479 "rw_ios_per_sec": 0, 00:59:08.479 "rw_mbytes_per_sec": 0, 00:59:08.479 "r_mbytes_per_sec": 0, 00:59:08.479 "w_mbytes_per_sec": 0 00:59:08.479 }, 00:59:08.479 "claimed": true, 00:59:08.479 "claim_type": "exclusive_write", 00:59:08.479 "zoned": false, 00:59:08.479 "supported_io_types": { 00:59:08.479 "read": true, 00:59:08.479 "write": true, 00:59:08.479 "unmap": true, 00:59:08.479 "flush": true, 00:59:08.479 "reset": true, 00:59:08.479 "nvme_admin": false, 00:59:08.479 "nvme_io": false, 00:59:08.479 "nvme_io_md": false, 00:59:08.479 "write_zeroes": true, 00:59:08.479 "zcopy": true, 00:59:08.479 "get_zone_info": false, 00:59:08.479 "zone_management": false, 00:59:08.479 "zone_append": false, 00:59:08.479 "compare": false, 00:59:08.479 "compare_and_write": false, 00:59:08.479 "abort": true, 00:59:08.479 "seek_hole": false, 00:59:08.479 "seek_data": false, 00:59:08.479 "copy": true, 00:59:08.479 "nvme_iov_md": false 00:59:08.479 }, 00:59:08.479 "memory_domains": [ 00:59:08.479 { 00:59:08.479 "dma_device_id": "system", 00:59:08.479 "dma_device_type": 1 00:59:08.479 }, 00:59:08.479 { 00:59:08.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:59:08.479 "dma_device_type": 2 00:59:08.479 } 00:59:08.479 ], 00:59:08.479 "driver_specific": {} 00:59:08.479 }, 00:59:08.479 { 00:59:08.479 "name": "Passthru0", 00:59:08.479 "aliases": [ 00:59:08.479 "6c1c135e-2bc0-5d40-9256-24c01ca0120a" 00:59:08.479 ], 00:59:08.479 "product_name": "passthru", 00:59:08.479 "block_size": 512, 00:59:08.479 "num_blocks": 16384, 00:59:08.479 "uuid": "6c1c135e-2bc0-5d40-9256-24c01ca0120a", 00:59:08.479 "assigned_rate_limits": { 00:59:08.479 "rw_ios_per_sec": 0, 00:59:08.479 "rw_mbytes_per_sec": 0, 00:59:08.479 "r_mbytes_per_sec": 0, 00:59:08.479 "w_mbytes_per_sec": 0 00:59:08.479 }, 00:59:08.479 "claimed": false, 00:59:08.479 "zoned": false, 00:59:08.479 "supported_io_types": { 00:59:08.479 "read": true, 00:59:08.479 "write": true, 00:59:08.480 "unmap": true, 00:59:08.480 "flush": true, 00:59:08.480 "reset": true, 00:59:08.480 "nvme_admin": false, 00:59:08.480 "nvme_io": false, 00:59:08.480 
"nvme_io_md": false, 00:59:08.480 "write_zeroes": true, 00:59:08.480 "zcopy": true, 00:59:08.480 "get_zone_info": false, 00:59:08.480 "zone_management": false, 00:59:08.480 "zone_append": false, 00:59:08.480 "compare": false, 00:59:08.480 "compare_and_write": false, 00:59:08.480 "abort": true, 00:59:08.480 "seek_hole": false, 00:59:08.480 "seek_data": false, 00:59:08.480 "copy": true, 00:59:08.480 "nvme_iov_md": false 00:59:08.480 }, 00:59:08.480 "memory_domains": [ 00:59:08.480 { 00:59:08.480 "dma_device_id": "system", 00:59:08.480 "dma_device_type": 1 00:59:08.480 }, 00:59:08.480 { 00:59:08.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:59:08.480 "dma_device_type": 2 00:59:08.480 } 00:59:08.480 ], 00:59:08.480 "driver_specific": { 00:59:08.480 "passthru": { 00:59:08.480 "name": "Passthru0", 00:59:08.480 "base_bdev_name": "Malloc2" 00:59:08.480 } 00:59:08.480 } 00:59:08.480 } 00:59:08.480 ]' 00:59:08.480 05:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:59:08.480 05:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:59:08.480 05:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:59:08.480 05:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:08.480 05:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:59:08.480 05:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:08.480 05:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:59:08.480 05:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:08.480 05:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:59:08.480 05:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:08.480 05:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:59:08.480 05:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:08.480 05:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:59:08.480 05:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:08.480 05:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:59:08.480 05:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:59:08.480 05:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:59:08.480 00:59:08.480 real 0m0.306s 00:59:08.480 user 0m0.192s 00:59:08.480 sys 0m0.053s 00:59:08.480 ************************************ 00:59:08.480 END TEST rpc_daemon_integrity 00:59:08.480 ************************************ 00:59:08.480 05:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:08.480 05:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:59:08.480 05:58:02 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:59:08.480 05:58:02 rpc -- rpc/rpc.sh@84 -- # killprocess 56620 00:59:08.480 05:58:02 rpc -- common/autotest_common.sh@954 -- # '[' -z 56620 ']' 00:59:08.480 05:58:02 rpc -- common/autotest_common.sh@958 -- # kill -0 56620 00:59:08.480 05:58:02 rpc -- common/autotest_common.sh@959 -- # uname 00:59:08.480 05:58:03 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:59:08.480 05:58:03 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56620 00:59:08.480 05:58:03 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:59:08.480 killing process with pid 56620 00:59:08.480 05:58:03 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:59:08.480 05:58:03 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56620' 00:59:08.480 05:58:03 rpc -- common/autotest_common.sh@973 -- # kill 56620 00:59:08.480 05:58:03 rpc -- common/autotest_common.sh@978 -- # wait 56620 00:59:09.051 00:59:09.051 real 0m2.755s 00:59:09.051 user 0m3.388s 00:59:09.051 sys 0m0.798s 00:59:09.051 05:58:03 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:09.051 ************************************ 00:59:09.051 END TEST rpc 00:59:09.051 05:58:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:59:09.051 ************************************ 00:59:09.051 05:58:03 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:59:09.051 05:58:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:09.051 05:58:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:09.051 05:58:03 -- common/autotest_common.sh@10 -- # set +x 00:59:09.051 ************************************ 00:59:09.051 START TEST skip_rpc 00:59:09.051 ************************************ 00:59:09.051 05:58:03 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:59:09.051 * Looking for test storage... 00:59:09.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:59:09.051 05:58:03 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:59:09.051 05:58:03 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:59:09.051 05:58:03 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:59:09.315 05:58:03 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:59:09.315 05:58:03 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:59:09.315 05:58:03 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:59:09.315 05:58:03 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:59:09.315 05:58:03 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:59:09.315 05:58:03 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:59:09.315 05:58:03 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:59:09.315 05:58:03 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:59:09.315 05:58:03 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:59:09.315 05:58:03 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:59:09.315 05:58:03 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:59:09.315 05:58:03 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:59:09.315 05:58:03 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:59:09.315 05:58:03 skip_rpc -- scripts/common.sh@345 -- # : 1 00:59:09.315 05:58:03 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:59:09.315 05:58:03 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:59:09.315 05:58:03 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:59:09.315 05:58:03 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:59:09.315 05:58:03 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:59:09.315 05:58:03 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:59:09.315 05:58:03 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:59:09.315 05:58:03 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:59:09.315 05:58:03 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:59:09.315 05:58:03 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:59:09.315 05:58:03 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:59:09.315 05:58:03 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:59:09.315 05:58:03 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:59:09.315 05:58:03 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:59:09.315 05:58:03 skip_rpc -- scripts/common.sh@368 -- # return 0 00:59:09.315 05:58:03 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:59:09.315 05:58:03 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:59:09.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:09.315 --rc genhtml_branch_coverage=1 00:59:09.315 --rc genhtml_function_coverage=1 00:59:09.315 --rc genhtml_legend=1 00:59:09.315 --rc geninfo_all_blocks=1 00:59:09.315 --rc geninfo_unexecuted_blocks=1 00:59:09.315 00:59:09.315 ' 00:59:09.315 05:58:03 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:59:09.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:09.315 --rc genhtml_branch_coverage=1 00:59:09.315 --rc genhtml_function_coverage=1 00:59:09.315 --rc genhtml_legend=1 00:59:09.315 --rc geninfo_all_blocks=1 00:59:09.315 --rc geninfo_unexecuted_blocks=1 00:59:09.315 00:59:09.315 ' 00:59:09.315 05:58:03 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:59:09.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:09.315 --rc genhtml_branch_coverage=1 00:59:09.315 --rc genhtml_function_coverage=1 00:59:09.315 --rc genhtml_legend=1 00:59:09.315 --rc geninfo_all_blocks=1 00:59:09.315 --rc geninfo_unexecuted_blocks=1 00:59:09.315 00:59:09.315 ' 00:59:09.315 05:58:03 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:59:09.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:09.315 --rc genhtml_branch_coverage=1 00:59:09.315 --rc genhtml_function_coverage=1 00:59:09.315 --rc genhtml_legend=1 00:59:09.315 --rc geninfo_all_blocks=1 00:59:09.315 --rc geninfo_unexecuted_blocks=1 00:59:09.315 00:59:09.315 ' 00:59:09.315 05:58:03 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:59:09.315 05:58:03 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:59:09.315 05:58:03 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:59:09.315 05:58:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:09.315 05:58:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:09.315 05:58:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:59:09.315 ************************************ 00:59:09.315 START TEST skip_rpc 00:59:09.315 ************************************ 00:59:09.315 05:58:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:59:09.315 05:58:03 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56821 00:59:09.315 05:58:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:59:09.315 05:58:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:59:09.315 05:58:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:59:09.315 [2024-12-09 05:58:03.730853] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:59:09.315 [2024-12-09 05:58:03.730921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56821 ] 00:59:09.315 [2024-12-09 05:58:03.883501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:59:09.603 [2024-12-09 05:58:03.929887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:59:09.603 [2024-12-09 05:58:03.985733] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:59:14.954 05:58:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:59:14.954 05:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:59:14.954 05:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:59:14.954 05:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:59:14.954 05:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:59:14.954 05:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:59:14.954 05:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:59:14.954 05:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:59:14.954 05:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:14.954 05:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:59:14.954 05:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:59:14.954 05:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:59:14.954 05:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:59:14.954 05:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:59:14.954 05:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:59:14.954 05:58:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:59:14.954 05:58:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56821 00:59:14.954 05:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56821 ']' 00:59:14.954 05:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56821 00:59:14.954 05:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:59:14.954 05:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:59:14.954 05:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56821 00:59:14.954 killing process with pid 56821 00:59:14.954 05:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:59:14.954 05:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:59:14.954 05:58:08 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 56821' 00:59:14.954 05:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56821 00:59:14.954 05:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56821 00:59:14.954 00:59:14.954 real 0m5.374s 00:59:14.954 user 0m5.040s 00:59:14.954 sys 0m0.272s 00:59:14.954 05:58:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:14.954 ************************************ 00:59:14.954 END TEST skip_rpc 00:59:14.954 ************************************ 00:59:14.954 05:58:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:59:14.954 05:58:09 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:59:14.954 05:58:09 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:14.954 05:58:09 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:14.954 05:58:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:59:14.954 ************************************ 00:59:14.954 START TEST skip_rpc_with_json 00:59:14.954 ************************************ 00:59:14.954 05:58:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:59:14.954 05:58:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:59:14.954 05:58:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56902 00:59:14.954 05:58:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:59:14.954 05:58:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:59:14.954 05:58:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56902 00:59:14.954 05:58:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 56902 ']' 00:59:14.954 05:58:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:59:14.954 05:58:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:59:14.954 05:58:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:59:14.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:59:14.954 05:58:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:59:14.954 05:58:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:59:14.954 [2024-12-09 05:58:09.180884] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
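skip_rpc_with_json, which starts here, drives a save_config / --json restart round trip: a TCP transport is created over RPC, the live configuration is saved to test/rpc/config.json, and a second spdk_tgt is then booted from that file with the RPC server disabled. Outside the harness the same sequence looks roughly like the sketch below, with rpc.py, spdk_tgt and the config path taken from the commands logged after this point.

  # Against the first target: add the TCP transport, then capture the running config.
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > test/rpc/config.json
  # Restart from the saved file; --no-rpc-server means the subsystems come only from the JSON.
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json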
00:59:14.954 [2024-12-09 05:58:09.181109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56902 ] 00:59:14.954 [2024-12-09 05:58:09.329706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:59:14.954 [2024-12-09 05:58:09.368917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:59:14.954 [2024-12-09 05:58:09.423787] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:59:15.522 05:58:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:59:15.522 05:58:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:59:15.522 05:58:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:59:15.522 05:58:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:15.522 05:58:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:59:15.523 [2024-12-09 05:58:10.034592] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:59:15.523 request: 00:59:15.523 { 00:59:15.523 "trtype": "tcp", 00:59:15.523 "method": "nvmf_get_transports", 00:59:15.523 "req_id": 1 00:59:15.523 } 00:59:15.523 Got JSON-RPC error response 00:59:15.523 response: 00:59:15.523 { 00:59:15.523 "code": -19, 00:59:15.523 "message": "No such device" 00:59:15.523 } 00:59:15.523 05:58:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:59:15.523 05:58:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:59:15.523 05:58:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:15.523 05:58:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:59:15.523 [2024-12-09 05:58:10.050655] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:59:15.523 05:58:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:15.523 05:58:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:59:15.523 05:58:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:15.523 05:58:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:59:15.782 05:58:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:15.782 05:58:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:59:15.782 { 00:59:15.782 "subsystems": [ 00:59:15.782 { 00:59:15.782 "subsystem": "fsdev", 00:59:15.782 "config": [ 00:59:15.782 { 00:59:15.782 "method": "fsdev_set_opts", 00:59:15.782 "params": { 00:59:15.782 "fsdev_io_pool_size": 65535, 00:59:15.782 "fsdev_io_cache_size": 256 00:59:15.782 } 00:59:15.782 } 00:59:15.782 ] 00:59:15.782 }, 00:59:15.782 { 00:59:15.782 "subsystem": "keyring", 00:59:15.782 "config": [] 00:59:15.782 }, 00:59:15.782 { 00:59:15.782 "subsystem": "iobuf", 00:59:15.782 "config": [ 00:59:15.782 { 00:59:15.782 "method": "iobuf_set_options", 00:59:15.782 "params": { 00:59:15.782 "small_pool_count": 8192, 00:59:15.782 "large_pool_count": 1024, 00:59:15.782 "small_bufsize": 8192, 00:59:15.782 "large_bufsize": 135168, 00:59:15.782 "enable_numa": false 00:59:15.782 } 
00:59:15.782 } 00:59:15.782 ] 00:59:15.782 }, 00:59:15.782 { 00:59:15.782 "subsystem": "sock", 00:59:15.782 "config": [ 00:59:15.782 { 00:59:15.782 "method": "sock_set_default_impl", 00:59:15.782 "params": { 00:59:15.782 "impl_name": "uring" 00:59:15.782 } 00:59:15.782 }, 00:59:15.782 { 00:59:15.782 "method": "sock_impl_set_options", 00:59:15.782 "params": { 00:59:15.782 "impl_name": "ssl", 00:59:15.782 "recv_buf_size": 4096, 00:59:15.782 "send_buf_size": 4096, 00:59:15.782 "enable_recv_pipe": true, 00:59:15.782 "enable_quickack": false, 00:59:15.782 "enable_placement_id": 0, 00:59:15.782 "enable_zerocopy_send_server": true, 00:59:15.782 "enable_zerocopy_send_client": false, 00:59:15.782 "zerocopy_threshold": 0, 00:59:15.782 "tls_version": 0, 00:59:15.782 "enable_ktls": false 00:59:15.782 } 00:59:15.782 }, 00:59:15.782 { 00:59:15.782 "method": "sock_impl_set_options", 00:59:15.782 "params": { 00:59:15.782 "impl_name": "posix", 00:59:15.782 "recv_buf_size": 2097152, 00:59:15.782 "send_buf_size": 2097152, 00:59:15.782 "enable_recv_pipe": true, 00:59:15.782 "enable_quickack": false, 00:59:15.782 "enable_placement_id": 0, 00:59:15.782 "enable_zerocopy_send_server": true, 00:59:15.782 "enable_zerocopy_send_client": false, 00:59:15.782 "zerocopy_threshold": 0, 00:59:15.782 "tls_version": 0, 00:59:15.782 "enable_ktls": false 00:59:15.782 } 00:59:15.782 }, 00:59:15.782 { 00:59:15.782 "method": "sock_impl_set_options", 00:59:15.782 "params": { 00:59:15.782 "impl_name": "uring", 00:59:15.782 "recv_buf_size": 2097152, 00:59:15.782 "send_buf_size": 2097152, 00:59:15.782 "enable_recv_pipe": true, 00:59:15.782 "enable_quickack": false, 00:59:15.782 "enable_placement_id": 0, 00:59:15.782 "enable_zerocopy_send_server": false, 00:59:15.782 "enable_zerocopy_send_client": false, 00:59:15.782 "zerocopy_threshold": 0, 00:59:15.782 "tls_version": 0, 00:59:15.782 "enable_ktls": false 00:59:15.782 } 00:59:15.782 } 00:59:15.782 ] 00:59:15.782 }, 00:59:15.782 { 00:59:15.782 "subsystem": "vmd", 00:59:15.782 "config": [] 00:59:15.782 }, 00:59:15.782 { 00:59:15.782 "subsystem": "accel", 00:59:15.782 "config": [ 00:59:15.782 { 00:59:15.782 "method": "accel_set_options", 00:59:15.782 "params": { 00:59:15.782 "small_cache_size": 128, 00:59:15.782 "large_cache_size": 16, 00:59:15.782 "task_count": 2048, 00:59:15.782 "sequence_count": 2048, 00:59:15.782 "buf_count": 2048 00:59:15.782 } 00:59:15.783 } 00:59:15.783 ] 00:59:15.783 }, 00:59:15.783 { 00:59:15.783 "subsystem": "bdev", 00:59:15.783 "config": [ 00:59:15.783 { 00:59:15.783 "method": "bdev_set_options", 00:59:15.783 "params": { 00:59:15.783 "bdev_io_pool_size": 65535, 00:59:15.783 "bdev_io_cache_size": 256, 00:59:15.783 "bdev_auto_examine": true, 00:59:15.783 "iobuf_small_cache_size": 128, 00:59:15.783 "iobuf_large_cache_size": 16 00:59:15.783 } 00:59:15.783 }, 00:59:15.783 { 00:59:15.783 "method": "bdev_raid_set_options", 00:59:15.783 "params": { 00:59:15.783 "process_window_size_kb": 1024, 00:59:15.783 "process_max_bandwidth_mb_sec": 0 00:59:15.783 } 00:59:15.783 }, 00:59:15.783 { 00:59:15.783 "method": "bdev_iscsi_set_options", 00:59:15.783 "params": { 00:59:15.783 "timeout_sec": 30 00:59:15.783 } 00:59:15.783 }, 00:59:15.783 { 00:59:15.783 "method": "bdev_nvme_set_options", 00:59:15.783 "params": { 00:59:15.783 "action_on_timeout": "none", 00:59:15.783 "timeout_us": 0, 00:59:15.783 "timeout_admin_us": 0, 00:59:15.783 "keep_alive_timeout_ms": 10000, 00:59:15.783 "arbitration_burst": 0, 00:59:15.783 "low_priority_weight": 0, 00:59:15.783 "medium_priority_weight": 
0, 00:59:15.783 "high_priority_weight": 0, 00:59:15.783 "nvme_adminq_poll_period_us": 10000, 00:59:15.783 "nvme_ioq_poll_period_us": 0, 00:59:15.783 "io_queue_requests": 0, 00:59:15.783 "delay_cmd_submit": true, 00:59:15.783 "transport_retry_count": 4, 00:59:15.783 "bdev_retry_count": 3, 00:59:15.783 "transport_ack_timeout": 0, 00:59:15.783 "ctrlr_loss_timeout_sec": 0, 00:59:15.783 "reconnect_delay_sec": 0, 00:59:15.783 "fast_io_fail_timeout_sec": 0, 00:59:15.783 "disable_auto_failback": false, 00:59:15.783 "generate_uuids": false, 00:59:15.783 "transport_tos": 0, 00:59:15.783 "nvme_error_stat": false, 00:59:15.783 "rdma_srq_size": 0, 00:59:15.783 "io_path_stat": false, 00:59:15.783 "allow_accel_sequence": false, 00:59:15.783 "rdma_max_cq_size": 0, 00:59:15.783 "rdma_cm_event_timeout_ms": 0, 00:59:15.783 "dhchap_digests": [ 00:59:15.783 "sha256", 00:59:15.783 "sha384", 00:59:15.783 "sha512" 00:59:15.783 ], 00:59:15.783 "dhchap_dhgroups": [ 00:59:15.783 "null", 00:59:15.783 "ffdhe2048", 00:59:15.783 "ffdhe3072", 00:59:15.783 "ffdhe4096", 00:59:15.783 "ffdhe6144", 00:59:15.783 "ffdhe8192" 00:59:15.783 ] 00:59:15.783 } 00:59:15.783 }, 00:59:15.783 { 00:59:15.783 "method": "bdev_nvme_set_hotplug", 00:59:15.783 "params": { 00:59:15.783 "period_us": 100000, 00:59:15.783 "enable": false 00:59:15.783 } 00:59:15.783 }, 00:59:15.783 { 00:59:15.783 "method": "bdev_wait_for_examine" 00:59:15.783 } 00:59:15.783 ] 00:59:15.783 }, 00:59:15.783 { 00:59:15.783 "subsystem": "scsi", 00:59:15.783 "config": null 00:59:15.783 }, 00:59:15.783 { 00:59:15.783 "subsystem": "scheduler", 00:59:15.783 "config": [ 00:59:15.783 { 00:59:15.783 "method": "framework_set_scheduler", 00:59:15.783 "params": { 00:59:15.783 "name": "static" 00:59:15.783 } 00:59:15.783 } 00:59:15.783 ] 00:59:15.783 }, 00:59:15.783 { 00:59:15.783 "subsystem": "vhost_scsi", 00:59:15.783 "config": [] 00:59:15.783 }, 00:59:15.783 { 00:59:15.783 "subsystem": "vhost_blk", 00:59:15.783 "config": [] 00:59:15.783 }, 00:59:15.783 { 00:59:15.783 "subsystem": "ublk", 00:59:15.783 "config": [] 00:59:15.783 }, 00:59:15.783 { 00:59:15.783 "subsystem": "nbd", 00:59:15.783 "config": [] 00:59:15.783 }, 00:59:15.783 { 00:59:15.783 "subsystem": "nvmf", 00:59:15.783 "config": [ 00:59:15.783 { 00:59:15.783 "method": "nvmf_set_config", 00:59:15.783 "params": { 00:59:15.783 "discovery_filter": "match_any", 00:59:15.783 "admin_cmd_passthru": { 00:59:15.783 "identify_ctrlr": false 00:59:15.783 }, 00:59:15.783 "dhchap_digests": [ 00:59:15.783 "sha256", 00:59:15.783 "sha384", 00:59:15.783 "sha512" 00:59:15.783 ], 00:59:15.783 "dhchap_dhgroups": [ 00:59:15.783 "null", 00:59:15.783 "ffdhe2048", 00:59:15.783 "ffdhe3072", 00:59:15.783 "ffdhe4096", 00:59:15.783 "ffdhe6144", 00:59:15.783 "ffdhe8192" 00:59:15.783 ] 00:59:15.783 } 00:59:15.783 }, 00:59:15.783 { 00:59:15.783 "method": "nvmf_set_max_subsystems", 00:59:15.783 "params": { 00:59:15.783 "max_subsystems": 1024 00:59:15.783 } 00:59:15.783 }, 00:59:15.783 { 00:59:15.783 "method": "nvmf_set_crdt", 00:59:15.783 "params": { 00:59:15.783 "crdt1": 0, 00:59:15.783 "crdt2": 0, 00:59:15.783 "crdt3": 0 00:59:15.783 } 00:59:15.783 }, 00:59:15.783 { 00:59:15.783 "method": "nvmf_create_transport", 00:59:15.783 "params": { 00:59:15.783 "trtype": "TCP", 00:59:15.783 "max_queue_depth": 128, 00:59:15.783 "max_io_qpairs_per_ctrlr": 127, 00:59:15.783 "in_capsule_data_size": 4096, 00:59:15.783 "max_io_size": 131072, 00:59:15.783 "io_unit_size": 131072, 00:59:15.783 "max_aq_depth": 128, 00:59:15.783 "num_shared_buffers": 511, 00:59:15.783 
"buf_cache_size": 4294967295, 00:59:15.783 "dif_insert_or_strip": false, 00:59:15.783 "zcopy": false, 00:59:15.783 "c2h_success": true, 00:59:15.783 "sock_priority": 0, 00:59:15.783 "abort_timeout_sec": 1, 00:59:15.783 "ack_timeout": 0, 00:59:15.783 "data_wr_pool_size": 0 00:59:15.783 } 00:59:15.783 } 00:59:15.783 ] 00:59:15.783 }, 00:59:15.783 { 00:59:15.783 "subsystem": "iscsi", 00:59:15.783 "config": [ 00:59:15.783 { 00:59:15.783 "method": "iscsi_set_options", 00:59:15.783 "params": { 00:59:15.783 "node_base": "iqn.2016-06.io.spdk", 00:59:15.783 "max_sessions": 128, 00:59:15.783 "max_connections_per_session": 2, 00:59:15.783 "max_queue_depth": 64, 00:59:15.783 "default_time2wait": 2, 00:59:15.783 "default_time2retain": 20, 00:59:15.783 "first_burst_length": 8192, 00:59:15.783 "immediate_data": true, 00:59:15.783 "allow_duplicated_isid": false, 00:59:15.783 "error_recovery_level": 0, 00:59:15.783 "nop_timeout": 60, 00:59:15.783 "nop_in_interval": 30, 00:59:15.783 "disable_chap": false, 00:59:15.783 "require_chap": false, 00:59:15.783 "mutual_chap": false, 00:59:15.783 "chap_group": 0, 00:59:15.783 "max_large_datain_per_connection": 64, 00:59:15.783 "max_r2t_per_connection": 4, 00:59:15.783 "pdu_pool_size": 36864, 00:59:15.783 "immediate_data_pool_size": 16384, 00:59:15.783 "data_out_pool_size": 2048 00:59:15.783 } 00:59:15.783 } 00:59:15.783 ] 00:59:15.783 } 00:59:15.783 ] 00:59:15.783 } 00:59:15.783 05:58:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:59:15.783 05:58:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56902 00:59:15.783 05:58:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56902 ']' 00:59:15.783 05:58:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56902 00:59:15.783 05:58:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:59:15.783 05:58:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:59:15.783 05:58:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56902 00:59:15.783 05:58:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:59:15.783 05:58:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:59:15.783 killing process with pid 56902 00:59:15.783 05:58:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56902' 00:59:15.783 05:58:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56902 00:59:15.784 05:58:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56902 00:59:16.043 05:58:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=56929 00:59:16.043 05:58:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:59:16.043 05:58:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:59:21.322 05:58:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 56929 00:59:21.322 05:58:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56929 ']' 00:59:21.322 05:58:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56929 00:59:21.322 05:58:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:59:21.322 05:58:15 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:59:21.322 05:58:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56929 00:59:21.322 killing process with pid 56929 00:59:21.322 05:58:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:59:21.322 05:58:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:59:21.322 05:58:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56929' 00:59:21.322 05:58:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56929 00:59:21.322 05:58:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56929 00:59:21.582 05:58:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:59:21.582 05:58:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:59:21.582 00:59:21.582 real 0m6.852s 00:59:21.582 user 0m6.531s 00:59:21.582 sys 0m0.637s 00:59:21.582 05:58:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:21.582 ************************************ 00:59:21.582 END TEST skip_rpc_with_json 00:59:21.582 ************************************ 00:59:21.582 05:58:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:59:21.582 05:58:16 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:59:21.582 05:58:16 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:21.582 05:58:16 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:21.582 05:58:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:59:21.582 ************************************ 00:59:21.582 START TEST skip_rpc_with_delay 00:59:21.582 ************************************ 00:59:21.582 05:58:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:59:21.582 05:58:16 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:59:21.582 05:58:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:59:21.582 05:58:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:59:21.583 05:58:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:59:21.583 05:58:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:59:21.583 05:58:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:59:21.583 05:58:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:59:21.583 05:58:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:59:21.583 05:58:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:59:21.583 05:58:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:59:21.583 05:58:16 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:59:21.583 05:58:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:59:21.583 [2024-12-09 05:58:16.122856] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:59:21.583 05:58:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:59:21.583 05:58:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:59:21.583 05:58:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:59:21.583 05:58:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:59:21.583 00:59:21.583 real 0m0.085s 00:59:21.583 user 0m0.049s 00:59:21.583 sys 0m0.035s 00:59:21.583 05:58:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:21.583 ************************************ 00:59:21.583 END TEST skip_rpc_with_delay 00:59:21.583 ************************************ 00:59:21.583 05:58:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:59:21.842 05:58:16 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:59:21.842 05:58:16 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:59:21.842 05:58:16 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:59:21.842 05:58:16 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:21.842 05:58:16 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:21.842 05:58:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:59:21.842 ************************************ 00:59:21.842 START TEST exit_on_failed_rpc_init 00:59:21.842 ************************************ 00:59:21.842 05:58:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:59:21.842 05:58:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57039 00:59:21.842 05:58:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:59:21.842 05:58:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57039 00:59:21.842 05:58:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57039 ']' 00:59:21.842 05:58:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:59:21.842 05:58:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:59:21.842 05:58:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:59:21.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:59:21.842 05:58:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:59:21.842 05:58:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:59:21.842 [2024-12-09 05:58:16.284532] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:59:21.842 [2024-12-09 05:58:16.284608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57039 ] 00:59:21.842 [2024-12-09 05:58:16.427574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:59:22.101 [2024-12-09 05:58:16.468496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:59:22.101 [2024-12-09 05:58:16.523885] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:59:22.669 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:59:22.670 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:59:22.670 05:58:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:59:22.670 05:58:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:59:22.670 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:59:22.670 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:59:22.670 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:59:22.670 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:59:22.670 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:59:22.670 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:59:22.670 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:59:22.670 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:59:22.670 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:59:22.670 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:59:22.670 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:59:22.670 [2024-12-09 05:58:17.185899] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:59:22.670 [2024-12-09 05:58:17.186128] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57057 ] 00:59:22.929 [2024-12-09 05:58:17.336596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:59:22.929 [2024-12-09 05:58:17.393566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:59:22.929 [2024-12-09 05:58:17.393797] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:59:22.929 [2024-12-09 05:58:17.393940] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:59:22.929 [2024-12-09 05:58:17.393970] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:59:22.929 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:59:22.929 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:59:22.929 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:59:22.929 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:59:22.929 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:59:22.929 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:59:22.929 05:58:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:59:22.929 05:58:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57039 00:59:22.929 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57039 ']' 00:59:22.929 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57039 00:59:22.929 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:59:22.929 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:59:22.929 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57039 00:59:22.929 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:59:22.929 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:59:22.929 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57039' 00:59:22.929 killing process with pid 57039 00:59:22.929 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57039 00:59:22.929 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57039 00:59:23.498 ************************************ 00:59:23.498 END TEST exit_on_failed_rpc_init 00:59:23.498 ************************************ 00:59:23.498 00:59:23.498 real 0m1.590s 00:59:23.498 user 0m1.735s 00:59:23.498 sys 0m0.407s 00:59:23.498 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:23.498 05:58:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:59:23.498 05:58:17 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:59:23.498 00:59:23.498 real 0m14.458s 00:59:23.498 user 0m13.571s 00:59:23.498 sys 0m1.697s 00:59:23.498 05:58:17 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:23.498 05:58:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:59:23.498 ************************************ 00:59:23.498 END TEST skip_rpc 00:59:23.498 ************************************ 00:59:23.498 05:58:17 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:59:23.498 05:58:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:23.498 05:58:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:23.498 05:58:17 -- common/autotest_common.sh@10 -- # set +x 00:59:23.498 
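Note on the exit_on_failed_rpc_init trace above: the test starts one spdk_tgt on the default RPC socket and then launches a second instance whose RPC listener cannot bind the same path, so the second instance is expected to exit non-zero. A minimal hand-run sketch of the same scenario, using only the binary path and core masks that appear in the trace (the sleep/kill housekeeping is illustrative and not part of the test script, which polls the socket instead):

# First target owns the default RPC socket /var/tmp/spdk.sock
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
sleep 1    # crude wait; the real test waits for the socket to appear
# Second target is expected to fail with:
#   "RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another."
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
echo "second instance exited with $?"    # non-zero exit is the expected outcome
kill %1    # stop the first target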
************************************ 00:59:23.498 START TEST rpc_client 00:59:23.498 ************************************ 00:59:23.498 05:58:17 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:59:23.498 * Looking for test storage... 00:59:23.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:59:23.758 05:58:18 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:59:23.758 05:58:18 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:59:23.758 05:58:18 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:59:23.758 05:58:18 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:59:23.758 05:58:18 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:59:23.758 05:58:18 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:59:23.758 05:58:18 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:59:23.758 05:58:18 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:59:23.758 05:58:18 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:59:23.758 05:58:18 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:59:23.758 05:58:18 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:59:23.758 05:58:18 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:59:23.758 05:58:18 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:59:23.758 05:58:18 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:59:23.758 05:58:18 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:59:23.758 05:58:18 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:59:23.758 05:58:18 rpc_client -- scripts/common.sh@345 -- # : 1 00:59:23.758 05:58:18 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:59:23.758 05:58:18 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:59:23.758 05:58:18 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:59:23.758 05:58:18 rpc_client -- scripts/common.sh@353 -- # local d=1 00:59:23.758 05:58:18 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:59:23.758 05:58:18 rpc_client -- scripts/common.sh@355 -- # echo 1 00:59:23.758 05:58:18 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:59:23.758 05:58:18 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:59:23.758 05:58:18 rpc_client -- scripts/common.sh@353 -- # local d=2 00:59:23.758 05:58:18 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:59:23.758 05:58:18 rpc_client -- scripts/common.sh@355 -- # echo 2 00:59:23.758 05:58:18 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:59:23.758 05:58:18 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:59:23.758 05:58:18 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:59:23.758 05:58:18 rpc_client -- scripts/common.sh@368 -- # return 0 00:59:23.758 05:58:18 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:59:23.758 05:58:18 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:59:23.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:23.758 --rc genhtml_branch_coverage=1 00:59:23.758 --rc genhtml_function_coverage=1 00:59:23.758 --rc genhtml_legend=1 00:59:23.758 --rc geninfo_all_blocks=1 00:59:23.758 --rc geninfo_unexecuted_blocks=1 00:59:23.758 00:59:23.758 ' 00:59:23.758 05:58:18 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:59:23.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:23.758 --rc genhtml_branch_coverage=1 00:59:23.758 --rc genhtml_function_coverage=1 00:59:23.758 --rc genhtml_legend=1 00:59:23.758 --rc geninfo_all_blocks=1 00:59:23.758 --rc geninfo_unexecuted_blocks=1 00:59:23.758 00:59:23.758 ' 00:59:23.758 05:58:18 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:59:23.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:23.758 --rc genhtml_branch_coverage=1 00:59:23.758 --rc genhtml_function_coverage=1 00:59:23.758 --rc genhtml_legend=1 00:59:23.758 --rc geninfo_all_blocks=1 00:59:23.758 --rc geninfo_unexecuted_blocks=1 00:59:23.758 00:59:23.758 ' 00:59:23.758 05:58:18 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:59:23.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:23.758 --rc genhtml_branch_coverage=1 00:59:23.758 --rc genhtml_function_coverage=1 00:59:23.758 --rc genhtml_legend=1 00:59:23.758 --rc geninfo_all_blocks=1 00:59:23.758 --rc geninfo_unexecuted_blocks=1 00:59:23.758 00:59:23.758 ' 00:59:23.758 05:58:18 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:59:23.758 OK 00:59:23.758 05:58:18 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:59:23.758 00:59:23.758 real 0m0.268s 00:59:23.758 user 0m0.150s 00:59:23.758 sys 0m0.134s 00:59:23.758 05:58:18 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:23.758 05:58:18 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:59:23.758 ************************************ 00:59:23.758 END TEST rpc_client 00:59:23.758 ************************************ 00:59:23.758 05:58:18 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:59:23.758 05:58:18 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:23.758 05:58:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:23.758 05:58:18 -- common/autotest_common.sh@10 -- # set +x 00:59:23.758 ************************************ 00:59:23.758 START TEST json_config 00:59:23.758 ************************************ 00:59:23.758 05:58:18 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:59:24.018 05:58:18 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:59:24.018 05:58:18 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:59:24.018 05:58:18 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:59:24.018 05:58:18 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:59:24.018 05:58:18 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:59:24.018 05:58:18 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:59:24.018 05:58:18 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:59:24.019 05:58:18 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:59:24.019 05:58:18 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:59:24.019 05:58:18 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:59:24.019 05:58:18 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:59:24.019 05:58:18 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:59:24.019 05:58:18 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:59:24.019 05:58:18 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:59:24.019 05:58:18 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:59:24.019 05:58:18 json_config -- scripts/common.sh@344 -- # case "$op" in 00:59:24.019 05:58:18 json_config -- scripts/common.sh@345 -- # : 1 00:59:24.019 05:58:18 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:59:24.019 05:58:18 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:59:24.019 05:58:18 json_config -- scripts/common.sh@365 -- # decimal 1 00:59:24.019 05:58:18 json_config -- scripts/common.sh@353 -- # local d=1 00:59:24.019 05:58:18 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:59:24.019 05:58:18 json_config -- scripts/common.sh@355 -- # echo 1 00:59:24.019 05:58:18 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:59:24.019 05:58:18 json_config -- scripts/common.sh@366 -- # decimal 2 00:59:24.019 05:58:18 json_config -- scripts/common.sh@353 -- # local d=2 00:59:24.019 05:58:18 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:59:24.019 05:58:18 json_config -- scripts/common.sh@355 -- # echo 2 00:59:24.019 05:58:18 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:59:24.019 05:58:18 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:59:24.019 05:58:18 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:59:24.019 05:58:18 json_config -- scripts/common.sh@368 -- # return 0 00:59:24.019 05:58:18 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:59:24.019 05:58:18 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:59:24.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:24.019 --rc genhtml_branch_coverage=1 00:59:24.019 --rc genhtml_function_coverage=1 00:59:24.019 --rc genhtml_legend=1 00:59:24.019 --rc geninfo_all_blocks=1 00:59:24.019 --rc geninfo_unexecuted_blocks=1 00:59:24.019 00:59:24.019 ' 00:59:24.019 05:58:18 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:59:24.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:24.019 --rc genhtml_branch_coverage=1 00:59:24.019 --rc genhtml_function_coverage=1 00:59:24.019 --rc genhtml_legend=1 00:59:24.019 --rc geninfo_all_blocks=1 00:59:24.019 --rc geninfo_unexecuted_blocks=1 00:59:24.019 00:59:24.019 ' 00:59:24.019 05:58:18 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:59:24.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:24.019 --rc genhtml_branch_coverage=1 00:59:24.019 --rc genhtml_function_coverage=1 00:59:24.019 --rc genhtml_legend=1 00:59:24.019 --rc geninfo_all_blocks=1 00:59:24.019 --rc geninfo_unexecuted_blocks=1 00:59:24.019 00:59:24.019 ' 00:59:24.019 05:58:18 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:59:24.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:24.019 --rc genhtml_branch_coverage=1 00:59:24.019 --rc genhtml_function_coverage=1 00:59:24.019 --rc genhtml_legend=1 00:59:24.019 --rc geninfo_all_blocks=1 00:59:24.019 --rc geninfo_unexecuted_blocks=1 00:59:24.019 00:59:24.019 ' 00:59:24.019 05:58:18 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:59:24.019 05:58:18 json_config -- nvmf/common.sh@7 -- # uname -s 00:59:24.019 05:58:18 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:59:24.019 05:58:18 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:59:24.019 05:58:18 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:59:24.019 05:58:18 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:59:24.019 05:58:18 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:59:24.019 05:58:18 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:59:24.019 05:58:18 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:59:24.019 05:58:18 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:59:24.019 05:58:18 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:59:24.019 05:58:18 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:59:24.019 05:58:18 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 00:59:24.019 05:58:18 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 00:59:24.019 05:58:18 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:59:24.019 05:58:18 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:59:24.019 05:58:18 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:59:24.019 05:58:18 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:59:24.019 05:58:18 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:59:24.019 05:58:18 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:59:24.019 05:58:18 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:59:24.019 05:58:18 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:59:24.019 05:58:18 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:59:24.019 05:58:18 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:24.019 05:58:18 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:24.019 05:58:18 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:24.019 05:58:18 json_config -- paths/export.sh@5 -- # export PATH 00:59:24.019 05:58:18 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:24.019 05:58:18 json_config -- nvmf/common.sh@51 -- # : 0 00:59:24.019 05:58:18 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:59:24.019 05:58:18 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:59:24.019 05:58:18 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:59:24.019 05:58:18 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:59:24.019 05:58:18 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:59:24.019 05:58:18 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:59:24.019 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:59:24.019 05:58:18 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:59:24.019 05:58:18 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:59:24.019 05:58:18 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:59:24.019 05:58:18 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:59:24.019 05:58:18 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:59:24.019 05:58:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:59:24.019 05:58:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:59:24.019 05:58:18 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:59:24.019 05:58:18 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:59:24.019 05:58:18 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:59:24.019 05:58:18 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:59:24.019 05:58:18 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:59:24.019 05:58:18 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:59:24.019 05:58:18 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:59:24.019 05:58:18 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:59:24.019 05:58:18 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:59:24.019 05:58:18 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:59:24.019 05:58:18 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:59:24.019 05:58:18 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:59:24.019 INFO: JSON configuration test init 00:59:24.019 05:58:18 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:59:24.019 05:58:18 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:59:24.020 05:58:18 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:59:24.020 05:58:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:59:24.020 05:58:18 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:59:24.020 05:58:18 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:59:24.020 05:58:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:59:24.020 05:58:18 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:59:24.020 05:58:18 json_config -- json_config/common.sh@9 -- # local app=target 00:59:24.020 05:58:18 json_config -- json_config/common.sh@10 -- # shift 
00:59:24.020 Waiting for target to run... 00:59:24.020 05:58:18 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:59:24.020 05:58:18 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:59:24.020 05:58:18 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:59:24.020 05:58:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:59:24.020 05:58:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:59:24.020 05:58:18 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57195 00:59:24.020 05:58:18 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:59:24.020 05:58:18 json_config -- json_config/common.sh@25 -- # waitforlisten 57195 /var/tmp/spdk_tgt.sock 00:59:24.020 05:58:18 json_config -- common/autotest_common.sh@835 -- # '[' -z 57195 ']' 00:59:24.020 05:58:18 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:59:24.020 05:58:18 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:59:24.020 05:58:18 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:59:24.020 05:58:18 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:59:24.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:59:24.020 05:58:18 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:59:24.020 05:58:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:59:24.279 [2024-12-09 05:58:18.615564] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:59:24.279 [2024-12-09 05:58:18.615740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57195 ] 00:59:24.540 [2024-12-09 05:58:19.058911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:59:24.799 [2024-12-09 05:58:19.127307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:59:25.057 05:58:19 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:59:25.057 05:58:19 json_config -- common/autotest_common.sh@868 -- # return 0 00:59:25.057 05:58:19 json_config -- json_config/common.sh@26 -- # echo '' 00:59:25.057 00:59:25.057 05:58:19 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:59:25.057 05:58:19 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:59:25.057 05:58:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:59:25.057 05:58:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:59:25.057 05:58:19 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:59:25.057 05:58:19 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:59:25.057 05:58:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:59:25.057 05:58:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:59:25.057 05:58:19 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:59:25.057 05:58:19 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:59:25.057 05:58:19 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:59:25.316 [2024-12-09 05:58:19.747030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:59:25.577 05:58:19 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:59:25.577 05:58:19 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:59:25.577 05:58:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:59:25.577 05:58:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:59:25.577 05:58:19 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:59:25.577 05:58:19 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:59:25.577 05:58:19 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:59:25.577 05:58:19 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:59:25.577 05:58:19 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:59:25.577 05:58:19 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:59:25.577 05:58:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:59:25.577 05:58:19 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:59:25.835 05:58:20 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:59:25.836 05:58:20 json_config -- json_config/json_config.sh@51 -- # local get_types 00:59:25.836 05:58:20 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:59:25.836 05:58:20 json_config -- json_config/json_config.sh@54 -- # sort 00:59:25.836 05:58:20 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:59:25.836 05:58:20 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:59:25.836 05:58:20 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:59:25.836 05:58:20 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:59:25.836 05:58:20 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:59:25.836 05:58:20 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:59:25.836 05:58:20 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:59:25.836 05:58:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:59:25.836 05:58:20 json_config -- json_config/json_config.sh@62 -- # return 0 00:59:25.836 05:58:20 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:59:25.836 05:58:20 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:59:25.836 05:58:20 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:59:25.836 05:58:20 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:59:25.836 05:58:20 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:59:25.836 05:58:20 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:59:25.836 05:58:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:59:25.836 05:58:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:59:25.836 05:58:20 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:59:25.836 05:58:20 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:59:25.836 05:58:20 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:59:25.836 05:58:20 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:59:25.836 05:58:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:59:26.095 MallocForNvmf0 00:59:26.095 05:58:20 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:59:26.095 05:58:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:59:26.095 MallocForNvmf1 00:59:26.095 05:58:20 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:59:26.095 05:58:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:59:26.355 [2024-12-09 05:58:20.839927] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:59:26.355 05:58:20 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:59:26.355 05:58:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:59:26.615 05:58:21 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:59:26.615 05:58:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:59:26.875 05:58:21 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:59:26.875 05:58:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:59:27.136 05:58:21 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:59:27.136 05:58:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:59:27.136 [2024-12-09 05:58:21.647001] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:59:27.136 05:58:21 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:59:27.136 05:58:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:59:27.136 05:58:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:59:27.395 05:58:21 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:59:27.395 05:58:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:59:27.395 05:58:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:59:27.395 05:58:21 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:59:27.395 05:58:21 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:59:27.395 05:58:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:59:27.655 MallocBdevForConfigChangeCheck 00:59:27.655 05:58:21 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:59:27.655 05:58:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:59:27.655 05:58:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:59:27.655 05:58:22 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:59:27.655 05:58:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:59:27.916 INFO: shutting down applications... 00:59:27.916 05:58:22 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:59:27.916 05:58:22 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:59:27.916 05:58:22 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:59:27.916 05:58:22 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:59:27.916 05:58:22 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:59:28.176 Calling clear_iscsi_subsystem 00:59:28.176 Calling clear_nvmf_subsystem 00:59:28.176 Calling clear_nbd_subsystem 00:59:28.176 Calling clear_ublk_subsystem 00:59:28.176 Calling clear_vhost_blk_subsystem 00:59:28.176 Calling clear_vhost_scsi_subsystem 00:59:28.176 Calling clear_bdev_subsystem 00:59:28.176 05:58:22 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:59:28.176 05:58:22 json_config -- json_config/json_config.sh@350 -- # count=100 00:59:28.176 05:58:22 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:59:28.176 05:58:22 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:59:28.176 05:58:22 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:59:28.176 05:58:22 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:59:28.748 05:58:23 json_config -- json_config/json_config.sh@352 -- # break 00:59:28.748 05:58:23 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:59:28.748 05:58:23 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:59:28.748 05:58:23 json_config -- json_config/common.sh@31 -- # local app=target 00:59:28.748 05:58:23 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:59:28.748 05:58:23 json_config -- json_config/common.sh@35 -- # [[ -n 57195 ]] 00:59:28.748 05:58:23 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57195 00:59:28.748 05:58:23 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:59:28.748 05:58:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:59:28.748 05:58:23 json_config -- json_config/common.sh@41 -- # kill -0 57195 00:59:28.748 05:58:23 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:59:29.009 05:58:23 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:59:29.009 05:58:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:59:29.009 05:58:23 json_config -- json_config/common.sh@41 -- # kill -0 57195 00:59:29.009 05:58:23 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:59:29.009 SPDK target shutdown done 00:59:29.009 INFO: relaunching applications... 00:59:29.009 05:58:23 json_config -- json_config/common.sh@43 -- # break 00:59:29.009 05:58:23 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:59:29.009 05:58:23 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:59:29.009 05:58:23 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:59:29.009 05:58:23 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:59:29.009 05:58:23 json_config -- json_config/common.sh@9 -- # local app=target 00:59:29.009 05:58:23 json_config -- json_config/common.sh@10 -- # shift 00:59:29.009 05:58:23 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:59:29.009 05:58:23 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:59:29.009 05:58:23 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:59:29.009 05:58:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:59:29.009 05:58:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:59:29.009 05:58:23 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57381 00:59:29.009 05:58:23 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:59:29.009 Waiting for target to run... 00:59:29.009 05:58:23 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:59:29.009 05:58:23 json_config -- json_config/common.sh@25 -- # waitforlisten 57381 /var/tmp/spdk_tgt.sock 00:59:29.009 05:58:23 json_config -- common/autotest_common.sh@835 -- # '[' -z 57381 ']' 00:59:29.009 05:58:23 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:59:29.009 05:58:23 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:59:29.009 05:58:23 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:59:29.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:59:29.009 05:58:23 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:59:29.009 05:58:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:59:29.270 [2024-12-09 05:58:23.645391] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:59:29.270 [2024-12-09 05:58:23.645663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57381 ] 00:59:29.867 [2024-12-09 05:58:24.186115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:59:29.867 [2024-12-09 05:58:24.235199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:59:29.867 [2024-12-09 05:58:24.373925] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:59:30.128 [2024-12-09 05:58:24.593208] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:59:30.128 [2024-12-09 05:58:24.625214] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:59:30.128 00:59:30.128 INFO: Checking if target configuration is the same... 00:59:30.128 05:58:24 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:59:30.128 05:58:24 json_config -- common/autotest_common.sh@868 -- # return 0 00:59:30.128 05:58:24 json_config -- json_config/common.sh@26 -- # echo '' 00:59:30.128 05:58:24 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:59:30.128 05:58:24 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:59:30.128 05:58:24 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:59:30.128 05:58:24 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:59:30.128 05:58:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:59:30.128 + '[' 2 -ne 2 ']' 00:59:30.128 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:59:30.128 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:59:30.128 + rootdir=/home/vagrant/spdk_repo/spdk 00:59:30.128 +++ basename /dev/fd/62 00:59:30.128 ++ mktemp /tmp/62.XXX 00:59:30.128 + tmp_file_1=/tmp/62.i1X 00:59:30.128 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:59:30.128 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:59:30.128 + tmp_file_2=/tmp/spdk_tgt_config.json.rOB 00:59:30.128 + ret=0 00:59:30.128 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:59:30.697 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:59:30.697 + diff -u /tmp/62.i1X /tmp/spdk_tgt_config.json.rOB 00:59:30.697 INFO: JSON config files are the same 00:59:30.697 + echo 'INFO: JSON config files are the same' 00:59:30.697 + rm /tmp/62.i1X /tmp/spdk_tgt_config.json.rOB 00:59:30.697 + exit 0 00:59:30.697 INFO: changing configuration and checking if this can be detected... 00:59:30.697 05:58:25 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:59:30.697 05:58:25 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:59:30.697 05:58:25 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:59:30.697 05:58:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:59:30.698 05:58:25 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:59:30.698 05:58:25 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:59:30.698 05:58:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:59:30.956 + '[' 2 -ne 2 ']' 00:59:30.956 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:59:30.956 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:59:30.956 + rootdir=/home/vagrant/spdk_repo/spdk 00:59:30.956 +++ basename /dev/fd/62 00:59:30.956 ++ mktemp /tmp/62.XXX 00:59:30.956 + tmp_file_1=/tmp/62.zyr 00:59:30.956 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:59:30.956 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:59:30.956 + tmp_file_2=/tmp/spdk_tgt_config.json.vrT 00:59:30.956 + ret=0 00:59:30.956 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:59:31.215 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:59:31.215 + diff -u /tmp/62.zyr /tmp/spdk_tgt_config.json.vrT 00:59:31.215 + ret=1 00:59:31.215 + echo '=== Start of file: /tmp/62.zyr ===' 00:59:31.215 + cat /tmp/62.zyr 00:59:31.215 + echo '=== End of file: /tmp/62.zyr ===' 00:59:31.215 + echo '' 00:59:31.215 + echo '=== Start of file: /tmp/spdk_tgt_config.json.vrT ===' 00:59:31.215 + cat /tmp/spdk_tgt_config.json.vrT 00:59:31.215 + echo '=== End of file: /tmp/spdk_tgt_config.json.vrT ===' 00:59:31.215 + echo '' 00:59:31.215 + rm /tmp/62.zyr /tmp/spdk_tgt_config.json.vrT 00:59:31.215 + exit 1 00:59:31.215 INFO: configuration change detected. 00:59:31.215 05:58:25 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
00:59:31.215 05:58:25 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:59:31.215 05:58:25 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:59:31.215 05:58:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:59:31.215 05:58:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:59:31.215 05:58:25 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:59:31.215 05:58:25 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:59:31.215 05:58:25 json_config -- json_config/json_config.sh@324 -- # [[ -n 57381 ]] 00:59:31.215 05:58:25 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:59:31.215 05:58:25 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:59:31.215 05:58:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:59:31.215 05:58:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:59:31.215 05:58:25 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:59:31.215 05:58:25 json_config -- json_config/json_config.sh@200 -- # uname -s 00:59:31.215 05:58:25 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:59:31.215 05:58:25 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:59:31.215 05:58:25 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:59:31.215 05:58:25 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:59:31.215 05:58:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:59:31.215 05:58:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:59:31.215 05:58:25 json_config -- json_config/json_config.sh@330 -- # killprocess 57381 00:59:31.215 05:58:25 json_config -- common/autotest_common.sh@954 -- # '[' -z 57381 ']' 00:59:31.215 05:58:25 json_config -- common/autotest_common.sh@958 -- # kill -0 57381 00:59:31.215 05:58:25 json_config -- common/autotest_common.sh@959 -- # uname 00:59:31.215 05:58:25 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:59:31.215 05:58:25 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57381 00:59:31.474 killing process with pid 57381 00:59:31.474 05:58:25 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:59:31.474 05:58:25 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:59:31.474 05:58:25 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57381' 00:59:31.474 05:58:25 json_config -- common/autotest_common.sh@973 -- # kill 57381 00:59:31.474 05:58:25 json_config -- common/autotest_common.sh@978 -- # wait 57381 00:59:31.474 05:58:26 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:59:31.474 05:58:26 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:59:31.474 05:58:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:59:31.474 05:58:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:59:31.734 INFO: Success 00:59:31.734 05:58:26 json_config -- json_config/json_config.sh@335 -- # return 0 00:59:31.734 05:58:26 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:59:31.734 ************************************ 00:59:31.734 END TEST json_config 00:59:31.734 
************************************ 00:59:31.734 00:59:31.734 real 0m7.785s 00:59:31.734 user 0m10.213s 00:59:31.734 sys 0m2.164s 00:59:31.734 05:58:26 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:31.734 05:58:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:59:31.734 05:58:26 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:59:31.734 05:58:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:31.734 05:58:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:31.734 05:58:26 -- common/autotest_common.sh@10 -- # set +x 00:59:31.734 ************************************ 00:59:31.734 START TEST json_config_extra_key 00:59:31.734 ************************************ 00:59:31.734 05:58:26 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:59:31.734 05:58:26 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:59:31.734 05:58:26 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:59:31.734 05:58:26 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:59:31.995 05:58:26 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:59:31.995 05:58:26 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:59:31.995 05:58:26 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:59:31.995 05:58:26 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:59:31.995 05:58:26 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:59:31.995 05:58:26 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:59:31.995 05:58:26 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:59:31.995 05:58:26 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:59:31.995 05:58:26 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:59:31.995 05:58:26 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:59:31.995 05:58:26 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:59:31.995 05:58:26 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:59:31.995 05:58:26 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:59:31.995 05:58:26 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:59:31.995 05:58:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:59:31.995 05:58:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:59:31.995 05:58:26 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:59:31.995 05:58:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:59:31.995 05:58:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:59:31.995 05:58:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:59:31.995 05:58:26 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:59:31.995 05:58:26 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:59:31.995 05:58:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:59:31.995 05:58:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:59:31.995 05:58:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:59:31.995 05:58:26 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:59:31.995 05:58:26 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:59:31.995 05:58:26 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:59:31.995 05:58:26 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:59:31.995 05:58:26 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:59:31.995 05:58:26 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:59:31.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:31.995 --rc genhtml_branch_coverage=1 00:59:31.995 --rc genhtml_function_coverage=1 00:59:31.995 --rc genhtml_legend=1 00:59:31.995 --rc geninfo_all_blocks=1 00:59:31.995 --rc geninfo_unexecuted_blocks=1 00:59:31.995 00:59:31.995 ' 00:59:31.995 05:58:26 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:59:31.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:31.995 --rc genhtml_branch_coverage=1 00:59:31.995 --rc genhtml_function_coverage=1 00:59:31.995 --rc genhtml_legend=1 00:59:31.995 --rc geninfo_all_blocks=1 00:59:31.995 --rc geninfo_unexecuted_blocks=1 00:59:31.995 00:59:31.995 ' 00:59:31.995 05:58:26 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:59:31.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:31.995 --rc genhtml_branch_coverage=1 00:59:31.996 --rc genhtml_function_coverage=1 00:59:31.996 --rc genhtml_legend=1 00:59:31.996 --rc geninfo_all_blocks=1 00:59:31.996 --rc geninfo_unexecuted_blocks=1 00:59:31.996 00:59:31.996 ' 00:59:31.996 05:58:26 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:59:31.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:31.996 --rc genhtml_branch_coverage=1 00:59:31.996 --rc genhtml_function_coverage=1 00:59:31.996 --rc genhtml_legend=1 00:59:31.996 --rc geninfo_all_blocks=1 00:59:31.996 --rc geninfo_unexecuted_blocks=1 00:59:31.996 00:59:31.996 ' 00:59:31.996 05:58:26 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:59:31.996 05:58:26 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:59:31.996 05:58:26 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:59:31.996 05:58:26 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:59:31.996 05:58:26 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:59:31.996 05:58:26 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:59:31.996 05:58:26 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:59:31.996 05:58:26 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:59:31.996 05:58:26 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:59:31.996 05:58:26 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:59:31.996 05:58:26 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:59:31.996 05:58:26 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:59:31.996 05:58:26 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 00:59:31.996 05:58:26 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 00:59:31.996 05:58:26 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:59:31.996 05:58:26 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:59:31.996 05:58:26 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:59:31.996 05:58:26 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:59:31.996 05:58:26 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:59:31.996 05:58:26 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:59:31.996 05:58:26 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:59:31.996 05:58:26 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:59:31.996 05:58:26 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:59:31.996 05:58:26 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:31.996 05:58:26 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:31.996 05:58:26 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:31.996 05:58:26 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:59:31.996 05:58:26 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:31.996 05:58:26 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:59:31.996 05:58:26 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:59:31.996 05:58:26 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:59:31.996 05:58:26 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:59:31.996 05:58:26 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:59:31.996 05:58:26 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:59:31.996 05:58:26 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:59:31.996 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:59:31.996 05:58:26 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:59:31.996 05:58:26 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:59:31.996 05:58:26 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:59:31.996 05:58:26 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:59:31.996 05:58:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:59:31.996 05:58:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:59:31.996 05:58:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:59:31.996 05:58:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:59:31.996 05:58:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:59:31.996 05:58:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:59:31.996 05:58:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:59:31.996 05:58:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:59:31.996 05:58:26 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:59:31.996 05:58:26 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:59:31.996 INFO: launching applications... 
00:59:31.996 05:58:26 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:59:31.996 05:58:26 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:59:31.996 05:58:26 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:59:31.996 05:58:26 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:59:31.996 05:58:26 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:59:31.996 05:58:26 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:59:31.996 05:58:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:59:31.996 05:58:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:59:31.996 05:58:26 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57535 00:59:31.996 05:58:26 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:59:31.996 Waiting for target to run... 00:59:31.996 05:58:26 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57535 /var/tmp/spdk_tgt.sock 00:59:31.996 05:58:26 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57535 ']' 00:59:31.996 05:58:26 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:59:31.996 05:58:26 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:59:31.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:59:31.996 05:58:26 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:59:31.996 05:58:26 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:59:31.996 05:58:26 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:59:31.996 05:58:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:59:31.996 [2024-12-09 05:58:26.477123] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:59:31.996 [2024-12-09 05:58:26.477195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57535 ] 00:59:32.567 [2024-12-09 05:58:26.856824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:59:32.567 [2024-12-09 05:58:26.895185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:59:32.567 [2024-12-09 05:58:26.924852] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:59:32.827 05:58:27 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:59:32.827 05:58:27 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:59:32.827 00:59:32.827 05:58:27 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:59:32.827 INFO: shutting down applications... 00:59:32.827 05:58:27 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:59:32.827 05:58:27 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:59:32.827 05:58:27 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:59:32.827 05:58:27 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:59:32.827 05:58:27 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57535 ]] 00:59:32.827 05:58:27 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57535 00:59:32.827 05:58:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:59:32.827 05:58:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:59:32.827 05:58:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57535 00:59:32.827 05:58:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:59:33.398 05:58:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:59:33.398 05:58:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:59:33.398 05:58:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57535 00:59:33.398 05:58:27 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:59:33.398 05:58:27 json_config_extra_key -- json_config/common.sh@43 -- # break 00:59:33.398 05:58:27 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:59:33.398 SPDK target shutdown done 00:59:33.398 05:58:27 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:59:33.398 Success 00:59:33.398 05:58:27 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:59:33.398 ************************************ 00:59:33.398 END TEST json_config_extra_key 00:59:33.398 ************************************ 00:59:33.398 00:59:33.398 real 0m1.660s 00:59:33.398 user 0m1.315s 00:59:33.398 sys 0m0.449s 00:59:33.398 05:58:27 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:33.398 05:58:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:59:33.398 05:58:27 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:59:33.398 05:58:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:33.398 05:58:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:33.398 05:58:27 -- common/autotest_common.sh@10 -- # set +x 00:59:33.398 ************************************ 00:59:33.398 START TEST alias_rpc 00:59:33.398 ************************************ 00:59:33.398 05:58:27 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:59:33.658 * Looking for test storage... 
00:59:33.658 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:59:33.658 05:58:28 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:59:33.658 05:58:28 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:59:33.658 05:58:28 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:59:33.658 05:58:28 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:59:33.658 05:58:28 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:59:33.658 05:58:28 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:59:33.658 05:58:28 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:59:33.658 05:58:28 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:59:33.658 05:58:28 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:59:33.658 05:58:28 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:59:33.658 05:58:28 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:59:33.658 05:58:28 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:59:33.658 05:58:28 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:59:33.658 05:58:28 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:59:33.658 05:58:28 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:59:33.658 05:58:28 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:59:33.658 05:58:28 alias_rpc -- scripts/common.sh@345 -- # : 1 00:59:33.658 05:58:28 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:59:33.658 05:58:28 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:59:33.658 05:58:28 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:59:33.658 05:58:28 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:59:33.658 05:58:28 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:59:33.658 05:58:28 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:59:33.658 05:58:28 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:59:33.658 05:58:28 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:59:33.658 05:58:28 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:59:33.658 05:58:28 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:59:33.658 05:58:28 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:59:33.658 05:58:28 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:59:33.658 05:58:28 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:59:33.658 05:58:28 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:59:33.658 05:58:28 alias_rpc -- scripts/common.sh@368 -- # return 0 00:59:33.658 05:58:28 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:59:33.658 05:58:28 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:59:33.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:33.658 --rc genhtml_branch_coverage=1 00:59:33.658 --rc genhtml_function_coverage=1 00:59:33.658 --rc genhtml_legend=1 00:59:33.658 --rc geninfo_all_blocks=1 00:59:33.658 --rc geninfo_unexecuted_blocks=1 00:59:33.658 00:59:33.658 ' 00:59:33.658 05:58:28 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:59:33.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:33.658 --rc genhtml_branch_coverage=1 00:59:33.658 --rc genhtml_function_coverage=1 00:59:33.658 --rc genhtml_legend=1 00:59:33.658 --rc geninfo_all_blocks=1 00:59:33.658 --rc geninfo_unexecuted_blocks=1 00:59:33.658 00:59:33.658 ' 00:59:33.658 05:58:28 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:59:33.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:33.658 --rc genhtml_branch_coverage=1 00:59:33.658 --rc genhtml_function_coverage=1 00:59:33.658 --rc genhtml_legend=1 00:59:33.658 --rc geninfo_all_blocks=1 00:59:33.658 --rc geninfo_unexecuted_blocks=1 00:59:33.658 00:59:33.658 ' 00:59:33.658 05:58:28 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:59:33.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:33.658 --rc genhtml_branch_coverage=1 00:59:33.658 --rc genhtml_function_coverage=1 00:59:33.658 --rc genhtml_legend=1 00:59:33.658 --rc geninfo_all_blocks=1 00:59:33.658 --rc geninfo_unexecuted_blocks=1 00:59:33.658 00:59:33.658 ' 00:59:33.658 05:58:28 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:59:33.658 05:58:28 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57607 00:59:33.658 05:58:28 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:59:33.658 05:58:28 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57607 00:59:33.658 05:58:28 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57607 ']' 00:59:33.658 05:58:28 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:59:33.658 05:58:28 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:59:33.658 05:58:28 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:59:33.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:59:33.658 05:58:28 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:59:33.658 05:58:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:59:33.658 [2024-12-09 05:58:28.204611] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:59:33.658 [2024-12-09 05:58:28.204794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57607 ] 00:59:33.918 [2024-12-09 05:58:28.355029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:59:33.918 [2024-12-09 05:58:28.394469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:59:33.918 [2024-12-09 05:58:28.449292] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:59:34.486 05:58:29 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:59:34.486 05:58:29 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:59:34.486 05:58:29 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:59:34.746 05:58:29 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57607 00:59:34.746 05:58:29 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57607 ']' 00:59:34.746 05:58:29 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57607 00:59:34.746 05:58:29 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:59:34.746 05:58:29 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:59:34.746 05:58:29 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57607 00:59:35.005 killing process with pid 57607 00:59:35.005 05:58:29 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:59:35.005 05:58:29 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:59:35.005 05:58:29 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57607' 00:59:35.005 05:58:29 alias_rpc -- common/autotest_common.sh@973 -- # kill 57607 00:59:35.005 05:58:29 alias_rpc -- common/autotest_common.sh@978 -- # wait 57607 00:59:35.265 ************************************ 00:59:35.265 END TEST alias_rpc 00:59:35.265 ************************************ 00:59:35.265 00:59:35.265 real 0m1.736s 00:59:35.265 user 0m1.817s 00:59:35.265 sys 0m0.467s 00:59:35.265 05:58:29 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:35.265 05:58:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:59:35.265 05:58:29 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:59:35.265 05:58:29 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:59:35.265 05:58:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:35.265 05:58:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:35.265 05:58:29 -- common/autotest_common.sh@10 -- # set +x 00:59:35.265 ************************************ 00:59:35.265 START TEST spdkcli_tcp 00:59:35.265 ************************************ 00:59:35.265 05:58:29 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:59:35.265 * Looking for test storage... 
00:59:35.525 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:59:35.525 05:58:29 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:59:35.525 05:58:29 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:59:35.525 05:58:29 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:59:35.525 05:58:29 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:59:35.525 05:58:29 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:59:35.525 05:58:29 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:59:35.525 05:58:29 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:59:35.525 05:58:29 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:59:35.525 05:58:29 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:59:35.525 05:58:29 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:59:35.525 05:58:29 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:59:35.525 05:58:29 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:59:35.525 05:58:29 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:59:35.525 05:58:29 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:59:35.525 05:58:29 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:59:35.525 05:58:29 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:59:35.525 05:58:29 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:59:35.525 05:58:29 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:59:35.525 05:58:29 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:59:35.525 05:58:29 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:59:35.525 05:58:29 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:59:35.525 05:58:29 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:59:35.525 05:58:29 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:59:35.525 05:58:29 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:59:35.525 05:58:29 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:59:35.525 05:58:29 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:59:35.525 05:58:29 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:59:35.525 05:58:29 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:59:35.525 05:58:29 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:59:35.525 05:58:29 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:59:35.525 05:58:29 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:59:35.525 05:58:29 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:59:35.525 05:58:29 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:59:35.525 05:58:29 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:59:35.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:35.525 --rc genhtml_branch_coverage=1 00:59:35.525 --rc genhtml_function_coverage=1 00:59:35.525 --rc genhtml_legend=1 00:59:35.525 --rc geninfo_all_blocks=1 00:59:35.525 --rc geninfo_unexecuted_blocks=1 00:59:35.525 00:59:35.525 ' 00:59:35.525 05:58:29 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:59:35.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:35.525 --rc genhtml_branch_coverage=1 00:59:35.525 --rc genhtml_function_coverage=1 00:59:35.525 --rc genhtml_legend=1 00:59:35.525 --rc geninfo_all_blocks=1 00:59:35.525 --rc geninfo_unexecuted_blocks=1 00:59:35.525 
00:59:35.525 ' 00:59:35.525 05:58:29 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:59:35.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:35.525 --rc genhtml_branch_coverage=1 00:59:35.525 --rc genhtml_function_coverage=1 00:59:35.525 --rc genhtml_legend=1 00:59:35.525 --rc geninfo_all_blocks=1 00:59:35.525 --rc geninfo_unexecuted_blocks=1 00:59:35.525 00:59:35.525 ' 00:59:35.525 05:58:29 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:59:35.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:35.525 --rc genhtml_branch_coverage=1 00:59:35.525 --rc genhtml_function_coverage=1 00:59:35.525 --rc genhtml_legend=1 00:59:35.525 --rc geninfo_all_blocks=1 00:59:35.525 --rc geninfo_unexecuted_blocks=1 00:59:35.525 00:59:35.525 ' 00:59:35.525 05:58:29 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:59:35.525 05:58:29 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:59:35.525 05:58:29 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:59:35.525 05:58:29 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:59:35.525 05:58:29 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:59:35.525 05:58:29 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:59:35.525 05:58:29 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:59:35.525 05:58:29 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:59:35.525 05:58:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:59:35.525 05:58:29 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57686 00:59:35.525 05:58:29 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:59:35.525 05:58:29 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57686 00:59:35.525 05:58:29 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57686 ']' 00:59:35.525 05:58:29 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:59:35.525 05:58:29 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:59:35.525 05:58:29 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:59:35.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:59:35.525 05:58:29 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:59:35.526 05:58:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:59:35.526 [2024-12-09 05:58:30.048219] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:59:35.526 [2024-12-09 05:58:30.048450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57686 ] 00:59:35.785 [2024-12-09 05:58:30.198870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:59:35.785 [2024-12-09 05:58:30.242116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:59:35.785 [2024-12-09 05:58:30.242141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:59:35.785 [2024-12-09 05:58:30.297380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:59:36.354 05:58:30 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:59:36.354 05:58:30 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:59:36.354 05:58:30 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:59:36.354 05:58:30 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57703 00:59:36.354 05:58:30 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:59:36.620 [ 00:59:36.620 "bdev_malloc_delete", 00:59:36.620 "bdev_malloc_create", 00:59:36.620 "bdev_null_resize", 00:59:36.620 "bdev_null_delete", 00:59:36.620 "bdev_null_create", 00:59:36.620 "bdev_nvme_cuse_unregister", 00:59:36.620 "bdev_nvme_cuse_register", 00:59:36.620 "bdev_opal_new_user", 00:59:36.620 "bdev_opal_set_lock_state", 00:59:36.620 "bdev_opal_delete", 00:59:36.620 "bdev_opal_get_info", 00:59:36.620 "bdev_opal_create", 00:59:36.620 "bdev_nvme_opal_revert", 00:59:36.620 "bdev_nvme_opal_init", 00:59:36.620 "bdev_nvme_send_cmd", 00:59:36.620 "bdev_nvme_set_keys", 00:59:36.620 "bdev_nvme_get_path_iostat", 00:59:36.620 "bdev_nvme_get_mdns_discovery_info", 00:59:36.620 "bdev_nvme_stop_mdns_discovery", 00:59:36.620 "bdev_nvme_start_mdns_discovery", 00:59:36.620 "bdev_nvme_set_multipath_policy", 00:59:36.620 "bdev_nvme_set_preferred_path", 00:59:36.620 "bdev_nvme_get_io_paths", 00:59:36.620 "bdev_nvme_remove_error_injection", 00:59:36.620 "bdev_nvme_add_error_injection", 00:59:36.620 "bdev_nvme_get_discovery_info", 00:59:36.620 "bdev_nvme_stop_discovery", 00:59:36.620 "bdev_nvme_start_discovery", 00:59:36.620 "bdev_nvme_get_controller_health_info", 00:59:36.620 "bdev_nvme_disable_controller", 00:59:36.620 "bdev_nvme_enable_controller", 00:59:36.620 "bdev_nvme_reset_controller", 00:59:36.620 "bdev_nvme_get_transport_statistics", 00:59:36.620 "bdev_nvme_apply_firmware", 00:59:36.620 "bdev_nvme_detach_controller", 00:59:36.620 "bdev_nvme_get_controllers", 00:59:36.620 "bdev_nvme_attach_controller", 00:59:36.620 "bdev_nvme_set_hotplug", 00:59:36.620 "bdev_nvme_set_options", 00:59:36.620 "bdev_passthru_delete", 00:59:36.620 "bdev_passthru_create", 00:59:36.620 "bdev_lvol_set_parent_bdev", 00:59:36.620 "bdev_lvol_set_parent", 00:59:36.620 "bdev_lvol_check_shallow_copy", 00:59:36.620 "bdev_lvol_start_shallow_copy", 00:59:36.620 "bdev_lvol_grow_lvstore", 00:59:36.620 "bdev_lvol_get_lvols", 00:59:36.620 "bdev_lvol_get_lvstores", 00:59:36.620 "bdev_lvol_delete", 00:59:36.620 "bdev_lvol_set_read_only", 00:59:36.620 "bdev_lvol_resize", 00:59:36.620 "bdev_lvol_decouple_parent", 00:59:36.620 "bdev_lvol_inflate", 00:59:36.620 "bdev_lvol_rename", 00:59:36.620 "bdev_lvol_clone_bdev", 00:59:36.620 "bdev_lvol_clone", 00:59:36.620 "bdev_lvol_snapshot", 
00:59:36.620 "bdev_lvol_create", 00:59:36.620 "bdev_lvol_delete_lvstore", 00:59:36.620 "bdev_lvol_rename_lvstore", 00:59:36.620 "bdev_lvol_create_lvstore", 00:59:36.620 "bdev_raid_set_options", 00:59:36.620 "bdev_raid_remove_base_bdev", 00:59:36.620 "bdev_raid_add_base_bdev", 00:59:36.620 "bdev_raid_delete", 00:59:36.620 "bdev_raid_create", 00:59:36.620 "bdev_raid_get_bdevs", 00:59:36.620 "bdev_error_inject_error", 00:59:36.620 "bdev_error_delete", 00:59:36.620 "bdev_error_create", 00:59:36.620 "bdev_split_delete", 00:59:36.620 "bdev_split_create", 00:59:36.620 "bdev_delay_delete", 00:59:36.620 "bdev_delay_create", 00:59:36.620 "bdev_delay_update_latency", 00:59:36.620 "bdev_zone_block_delete", 00:59:36.620 "bdev_zone_block_create", 00:59:36.620 "blobfs_create", 00:59:36.620 "blobfs_detect", 00:59:36.620 "blobfs_set_cache_size", 00:59:36.620 "bdev_aio_delete", 00:59:36.620 "bdev_aio_rescan", 00:59:36.620 "bdev_aio_create", 00:59:36.620 "bdev_ftl_set_property", 00:59:36.620 "bdev_ftl_get_properties", 00:59:36.620 "bdev_ftl_get_stats", 00:59:36.620 "bdev_ftl_unmap", 00:59:36.620 "bdev_ftl_unload", 00:59:36.620 "bdev_ftl_delete", 00:59:36.620 "bdev_ftl_load", 00:59:36.621 "bdev_ftl_create", 00:59:36.621 "bdev_virtio_attach_controller", 00:59:36.621 "bdev_virtio_scsi_get_devices", 00:59:36.621 "bdev_virtio_detach_controller", 00:59:36.621 "bdev_virtio_blk_set_hotplug", 00:59:36.621 "bdev_iscsi_delete", 00:59:36.621 "bdev_iscsi_create", 00:59:36.621 "bdev_iscsi_set_options", 00:59:36.621 "bdev_uring_delete", 00:59:36.621 "bdev_uring_rescan", 00:59:36.621 "bdev_uring_create", 00:59:36.621 "accel_error_inject_error", 00:59:36.621 "ioat_scan_accel_module", 00:59:36.621 "dsa_scan_accel_module", 00:59:36.621 "iaa_scan_accel_module", 00:59:36.621 "keyring_file_remove_key", 00:59:36.621 "keyring_file_add_key", 00:59:36.621 "keyring_linux_set_options", 00:59:36.621 "fsdev_aio_delete", 00:59:36.621 "fsdev_aio_create", 00:59:36.621 "iscsi_get_histogram", 00:59:36.621 "iscsi_enable_histogram", 00:59:36.621 "iscsi_set_options", 00:59:36.621 "iscsi_get_auth_groups", 00:59:36.621 "iscsi_auth_group_remove_secret", 00:59:36.621 "iscsi_auth_group_add_secret", 00:59:36.621 "iscsi_delete_auth_group", 00:59:36.621 "iscsi_create_auth_group", 00:59:36.621 "iscsi_set_discovery_auth", 00:59:36.621 "iscsi_get_options", 00:59:36.621 "iscsi_target_node_request_logout", 00:59:36.621 "iscsi_target_node_set_redirect", 00:59:36.621 "iscsi_target_node_set_auth", 00:59:36.621 "iscsi_target_node_add_lun", 00:59:36.621 "iscsi_get_stats", 00:59:36.621 "iscsi_get_connections", 00:59:36.621 "iscsi_portal_group_set_auth", 00:59:36.621 "iscsi_start_portal_group", 00:59:36.621 "iscsi_delete_portal_group", 00:59:36.621 "iscsi_create_portal_group", 00:59:36.621 "iscsi_get_portal_groups", 00:59:36.621 "iscsi_delete_target_node", 00:59:36.621 "iscsi_target_node_remove_pg_ig_maps", 00:59:36.621 "iscsi_target_node_add_pg_ig_maps", 00:59:36.621 "iscsi_create_target_node", 00:59:36.621 "iscsi_get_target_nodes", 00:59:36.621 "iscsi_delete_initiator_group", 00:59:36.621 "iscsi_initiator_group_remove_initiators", 00:59:36.621 "iscsi_initiator_group_add_initiators", 00:59:36.621 "iscsi_create_initiator_group", 00:59:36.621 "iscsi_get_initiator_groups", 00:59:36.621 "nvmf_set_crdt", 00:59:36.621 "nvmf_set_config", 00:59:36.621 "nvmf_set_max_subsystems", 00:59:36.621 "nvmf_stop_mdns_prr", 00:59:36.621 "nvmf_publish_mdns_prr", 00:59:36.621 "nvmf_subsystem_get_listeners", 00:59:36.621 "nvmf_subsystem_get_qpairs", 00:59:36.621 
"nvmf_subsystem_get_controllers", 00:59:36.621 "nvmf_get_stats", 00:59:36.621 "nvmf_get_transports", 00:59:36.621 "nvmf_create_transport", 00:59:36.621 "nvmf_get_targets", 00:59:36.621 "nvmf_delete_target", 00:59:36.621 "nvmf_create_target", 00:59:36.621 "nvmf_subsystem_allow_any_host", 00:59:36.621 "nvmf_subsystem_set_keys", 00:59:36.621 "nvmf_subsystem_remove_host", 00:59:36.621 "nvmf_subsystem_add_host", 00:59:36.621 "nvmf_ns_remove_host", 00:59:36.621 "nvmf_ns_add_host", 00:59:36.621 "nvmf_subsystem_remove_ns", 00:59:36.621 "nvmf_subsystem_set_ns_ana_group", 00:59:36.621 "nvmf_subsystem_add_ns", 00:59:36.621 "nvmf_subsystem_listener_set_ana_state", 00:59:36.621 "nvmf_discovery_get_referrals", 00:59:36.621 "nvmf_discovery_remove_referral", 00:59:36.621 "nvmf_discovery_add_referral", 00:59:36.621 "nvmf_subsystem_remove_listener", 00:59:36.621 "nvmf_subsystem_add_listener", 00:59:36.621 "nvmf_delete_subsystem", 00:59:36.621 "nvmf_create_subsystem", 00:59:36.621 "nvmf_get_subsystems", 00:59:36.621 "env_dpdk_get_mem_stats", 00:59:36.621 "nbd_get_disks", 00:59:36.621 "nbd_stop_disk", 00:59:36.621 "nbd_start_disk", 00:59:36.621 "ublk_recover_disk", 00:59:36.621 "ublk_get_disks", 00:59:36.621 "ublk_stop_disk", 00:59:36.621 "ublk_start_disk", 00:59:36.621 "ublk_destroy_target", 00:59:36.621 "ublk_create_target", 00:59:36.621 "virtio_blk_create_transport", 00:59:36.621 "virtio_blk_get_transports", 00:59:36.621 "vhost_controller_set_coalescing", 00:59:36.621 "vhost_get_controllers", 00:59:36.621 "vhost_delete_controller", 00:59:36.621 "vhost_create_blk_controller", 00:59:36.621 "vhost_scsi_controller_remove_target", 00:59:36.621 "vhost_scsi_controller_add_target", 00:59:36.621 "vhost_start_scsi_controller", 00:59:36.621 "vhost_create_scsi_controller", 00:59:36.621 "thread_set_cpumask", 00:59:36.621 "scheduler_set_options", 00:59:36.621 "framework_get_governor", 00:59:36.621 "framework_get_scheduler", 00:59:36.621 "framework_set_scheduler", 00:59:36.621 "framework_get_reactors", 00:59:36.621 "thread_get_io_channels", 00:59:36.621 "thread_get_pollers", 00:59:36.621 "thread_get_stats", 00:59:36.621 "framework_monitor_context_switch", 00:59:36.621 "spdk_kill_instance", 00:59:36.621 "log_enable_timestamps", 00:59:36.621 "log_get_flags", 00:59:36.621 "log_clear_flag", 00:59:36.621 "log_set_flag", 00:59:36.621 "log_get_level", 00:59:36.621 "log_set_level", 00:59:36.621 "log_get_print_level", 00:59:36.621 "log_set_print_level", 00:59:36.621 "framework_enable_cpumask_locks", 00:59:36.621 "framework_disable_cpumask_locks", 00:59:36.621 "framework_wait_init", 00:59:36.621 "framework_start_init", 00:59:36.621 "scsi_get_devices", 00:59:36.621 "bdev_get_histogram", 00:59:36.621 "bdev_enable_histogram", 00:59:36.621 "bdev_set_qos_limit", 00:59:36.621 "bdev_set_qd_sampling_period", 00:59:36.621 "bdev_get_bdevs", 00:59:36.621 "bdev_reset_iostat", 00:59:36.621 "bdev_get_iostat", 00:59:36.621 "bdev_examine", 00:59:36.621 "bdev_wait_for_examine", 00:59:36.621 "bdev_set_options", 00:59:36.621 "accel_get_stats", 00:59:36.621 "accel_set_options", 00:59:36.621 "accel_set_driver", 00:59:36.621 "accel_crypto_key_destroy", 00:59:36.621 "accel_crypto_keys_get", 00:59:36.621 "accel_crypto_key_create", 00:59:36.621 "accel_assign_opc", 00:59:36.621 "accel_get_module_info", 00:59:36.621 "accel_get_opc_assignments", 00:59:36.621 "vmd_rescan", 00:59:36.621 "vmd_remove_device", 00:59:36.621 "vmd_enable", 00:59:36.621 "sock_get_default_impl", 00:59:36.621 "sock_set_default_impl", 00:59:36.621 "sock_impl_set_options", 00:59:36.621 
"sock_impl_get_options", 00:59:36.621 "iobuf_get_stats", 00:59:36.622 "iobuf_set_options", 00:59:36.622 "keyring_get_keys", 00:59:36.622 "framework_get_pci_devices", 00:59:36.622 "framework_get_config", 00:59:36.622 "framework_get_subsystems", 00:59:36.622 "fsdev_set_opts", 00:59:36.622 "fsdev_get_opts", 00:59:36.622 "trace_get_info", 00:59:36.622 "trace_get_tpoint_group_mask", 00:59:36.622 "trace_disable_tpoint_group", 00:59:36.622 "trace_enable_tpoint_group", 00:59:36.622 "trace_clear_tpoint_mask", 00:59:36.622 "trace_set_tpoint_mask", 00:59:36.622 "notify_get_notifications", 00:59:36.622 "notify_get_types", 00:59:36.622 "spdk_get_version", 00:59:36.622 "rpc_get_methods" 00:59:36.622 ] 00:59:36.622 05:58:31 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:59:36.622 05:58:31 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:59:36.622 05:58:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:59:36.622 05:58:31 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:59:36.622 05:58:31 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57686 00:59:36.622 05:58:31 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57686 ']' 00:59:36.622 05:58:31 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57686 00:59:36.622 05:58:31 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:59:36.622 05:58:31 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:59:36.622 05:58:31 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57686 00:59:36.885 killing process with pid 57686 00:59:36.885 05:58:31 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:59:36.885 05:58:31 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:59:36.885 05:58:31 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57686' 00:59:36.885 05:58:31 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57686 00:59:36.885 05:58:31 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57686 00:59:37.144 ************************************ 00:59:37.144 END TEST spdkcli_tcp 00:59:37.144 ************************************ 00:59:37.144 00:59:37.144 real 0m1.807s 00:59:37.144 user 0m3.121s 00:59:37.144 sys 0m0.538s 00:59:37.144 05:58:31 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:37.144 05:58:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:59:37.144 05:58:31 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:59:37.144 05:58:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:37.144 05:58:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:37.144 05:58:31 -- common/autotest_common.sh@10 -- # set +x 00:59:37.144 ************************************ 00:59:37.144 START TEST dpdk_mem_utility 00:59:37.144 ************************************ 00:59:37.144 05:58:31 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:59:37.403 * Looking for test storage... 
00:59:37.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:59:37.403 05:58:31 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:59:37.403 05:58:31 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:59:37.403 05:58:31 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:59:37.403 05:58:31 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:59:37.403 05:58:31 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:59:37.403 05:58:31 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:59:37.403 05:58:31 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:59:37.403 05:58:31 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:59:37.403 05:58:31 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:59:37.403 05:58:31 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:59:37.403 05:58:31 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:59:37.403 05:58:31 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:59:37.403 05:58:31 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:59:37.403 05:58:31 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:59:37.403 05:58:31 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:59:37.403 05:58:31 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:59:37.403 05:58:31 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:59:37.403 05:58:31 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:59:37.403 05:58:31 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:59:37.403 05:58:31 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:59:37.403 05:58:31 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:59:37.403 05:58:31 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:59:37.403 05:58:31 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:59:37.403 05:58:31 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:59:37.403 05:58:31 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:59:37.403 05:58:31 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:59:37.403 05:58:31 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:59:37.403 05:58:31 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:59:37.403 05:58:31 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:59:37.403 05:58:31 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:59:37.403 05:58:31 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:59:37.403 05:58:31 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:59:37.403 05:58:31 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:59:37.403 05:58:31 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:59:37.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:37.403 --rc genhtml_branch_coverage=1 00:59:37.403 --rc genhtml_function_coverage=1 00:59:37.403 --rc genhtml_legend=1 00:59:37.403 --rc geninfo_all_blocks=1 00:59:37.403 --rc geninfo_unexecuted_blocks=1 00:59:37.403 00:59:37.403 ' 00:59:37.403 05:58:31 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:59:37.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:37.403 --rc 
genhtml_branch_coverage=1 00:59:37.403 --rc genhtml_function_coverage=1 00:59:37.403 --rc genhtml_legend=1 00:59:37.403 --rc geninfo_all_blocks=1 00:59:37.403 --rc geninfo_unexecuted_blocks=1 00:59:37.403 00:59:37.403 ' 00:59:37.403 05:58:31 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:59:37.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:37.403 --rc genhtml_branch_coverage=1 00:59:37.403 --rc genhtml_function_coverage=1 00:59:37.403 --rc genhtml_legend=1 00:59:37.403 --rc geninfo_all_blocks=1 00:59:37.403 --rc geninfo_unexecuted_blocks=1 00:59:37.403 00:59:37.403 ' 00:59:37.403 05:58:31 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:59:37.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:37.403 --rc genhtml_branch_coverage=1 00:59:37.403 --rc genhtml_function_coverage=1 00:59:37.403 --rc genhtml_legend=1 00:59:37.403 --rc geninfo_all_blocks=1 00:59:37.403 --rc geninfo_unexecuted_blocks=1 00:59:37.403 00:59:37.403 ' 00:59:37.403 05:58:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:59:37.403 05:58:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57785 00:59:37.403 05:58:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:59:37.403 05:58:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57785 00:59:37.403 05:58:31 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57785 ']' 00:59:37.403 05:58:31 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:59:37.403 05:58:31 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:59:37.403 05:58:31 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:59:37.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:59:37.403 05:58:31 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:59:37.403 05:58:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:59:37.403 [2024-12-09 05:58:31.911667] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:59:37.403 [2024-12-09 05:58:31.911861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57785 ] 00:59:37.662 [2024-12-09 05:58:32.062145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:59:37.662 [2024-12-09 05:58:32.101703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:59:37.662 [2024-12-09 05:58:32.156501] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:59:38.283 05:58:32 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:59:38.283 05:58:32 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:59:38.283 05:58:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:59:38.283 05:58:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:59:38.283 05:58:32 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:38.283 05:58:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:59:38.283 { 00:59:38.283 "filename": "/tmp/spdk_mem_dump.txt" 00:59:38.283 } 00:59:38.283 05:58:32 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:38.283 05:58:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:59:38.283 DPDK memory size 818.000000 MiB in 1 heap(s) 00:59:38.283 1 heaps totaling size 818.000000 MiB 00:59:38.283 size: 818.000000 MiB heap id: 0 00:59:38.283 end heaps---------- 00:59:38.283 9 mempools totaling size 603.782043 MiB 00:59:38.283 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:59:38.283 size: 158.602051 MiB name: PDU_data_out_Pool 00:59:38.283 size: 100.555481 MiB name: bdev_io_57785 00:59:38.284 size: 50.003479 MiB name: msgpool_57785 00:59:38.284 size: 36.509338 MiB name: fsdev_io_57785 00:59:38.284 size: 21.763794 MiB name: PDU_Pool 00:59:38.284 size: 19.513306 MiB name: SCSI_TASK_Pool 00:59:38.284 size: 4.133484 MiB name: evtpool_57785 00:59:38.284 size: 0.026123 MiB name: Session_Pool 00:59:38.284 end mempools------- 00:59:38.284 6 memzones totaling size 4.142822 MiB 00:59:38.284 size: 1.000366 MiB name: RG_ring_0_57785 00:59:38.284 size: 1.000366 MiB name: RG_ring_1_57785 00:59:38.284 size: 1.000366 MiB name: RG_ring_4_57785 00:59:38.284 size: 1.000366 MiB name: RG_ring_5_57785 00:59:38.284 size: 0.125366 MiB name: RG_ring_2_57785 00:59:38.284 size: 0.015991 MiB name: RG_ring_3_57785 00:59:38.284 end memzones------- 00:59:38.284 05:58:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:59:38.560 heap id: 0 total size: 818.000000 MiB number of busy elements: 312 number of free elements: 15 00:59:38.560 list of free elements. 
size: 10.803406 MiB 00:59:38.560 element at address: 0x200019200000 with size: 0.999878 MiB 00:59:38.560 element at address: 0x200019400000 with size: 0.999878 MiB 00:59:38.560 element at address: 0x200032000000 with size: 0.994446 MiB 00:59:38.560 element at address: 0x200000400000 with size: 0.993958 MiB 00:59:38.560 element at address: 0x200006400000 with size: 0.959839 MiB 00:59:38.560 element at address: 0x200012c00000 with size: 0.944275 MiB 00:59:38.560 element at address: 0x200019600000 with size: 0.936584 MiB 00:59:38.560 element at address: 0x200000200000 with size: 0.717346 MiB 00:59:38.560 element at address: 0x20001ae00000 with size: 0.567871 MiB 00:59:38.560 element at address: 0x20000a600000 with size: 0.488892 MiB 00:59:38.560 element at address: 0x200000c00000 with size: 0.486267 MiB 00:59:38.560 element at address: 0x200019800000 with size: 0.485657 MiB 00:59:38.560 element at address: 0x200003e00000 with size: 0.480286 MiB 00:59:38.560 element at address: 0x200028200000 with size: 0.396484 MiB 00:59:38.560 element at address: 0x200000800000 with size: 0.351746 MiB 00:59:38.560 list of standard malloc elements. size: 199.267700 MiB 00:59:38.560 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:59:38.560 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:59:38.560 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:59:38.560 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:59:38.560 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:59:38.560 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:59:38.560 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:59:38.560 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:59:38.560 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:59:38.560 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:59:38.560 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:59:38.560 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:59:38.560 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:59:38.560 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:59:38.560 element at address: 0x20000085e580 with size: 0.000183 MiB 00:59:38.560 element at address: 0x20000087e840 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000087e900 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000087f080 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000087f140 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000087f200 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000087f380 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000087f440 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000087f500 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000087f680 with size: 0.000183 MiB 00:59:38.561 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:59:38.561 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:59:38.561 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000cff000 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200003efb980 with size: 0.000183 MiB 00:59:38.561 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:59:38.561 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:59:38.561 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:59:38.561 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae92d40 with size: 0.000183 MiB 
00:59:38.561 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:59:38.561 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:59:38.562 element at 
address: 0x20001ae952c0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:59:38.562 element at address: 0x200028265800 with size: 0.000183 MiB 00:59:38.562 element at address: 0x2000282658c0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826c4c0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826c780 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826c840 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826c900 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826d080 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826d140 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826d200 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826d380 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826d440 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826d500 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826d680 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826d740 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826d800 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826d980 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826da40 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826db00 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826de00 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826df80 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826e040 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826e100 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826e280 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826e340 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826e400 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826e580 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826e640 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826e700 
with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826e880 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826e940 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826f000 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826f180 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826f240 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826f300 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826f480 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826f540 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826f600 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826f780 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826f840 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826f900 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:59:38.562 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:59:38.562 list of memzone associated elements. 
size: 607.928894 MiB 00:59:38.562 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:59:38.562 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:59:38.562 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:59:38.562 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:59:38.562 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:59:38.562 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57785_0 00:59:38.562 element at address: 0x200000dff380 with size: 48.003052 MiB 00:59:38.562 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57785_0 00:59:38.562 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:59:38.562 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57785_0 00:59:38.562 element at address: 0x2000199be940 with size: 20.255554 MiB 00:59:38.562 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:59:38.562 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:59:38.562 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:59:38.562 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:59:38.562 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57785_0 00:59:38.562 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:59:38.562 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57785 00:59:38.562 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:59:38.562 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57785 00:59:38.562 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:59:38.562 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:59:38.562 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:59:38.562 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:59:38.562 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:59:38.562 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:59:38.562 element at address: 0x200003efba40 with size: 1.008118 MiB 00:59:38.562 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:59:38.562 element at address: 0x200000cff180 with size: 1.000488 MiB 00:59:38.562 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57785 00:59:38.562 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:59:38.562 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57785 00:59:38.562 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:59:38.562 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57785 00:59:38.562 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:59:38.562 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57785 00:59:38.562 element at address: 0x20000087f740 with size: 0.500488 MiB 00:59:38.562 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57785 00:59:38.563 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:59:38.563 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57785 00:59:38.563 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:59:38.563 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:59:38.563 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:59:38.563 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:59:38.563 element at address: 0x20001987c540 with size: 0.250488 MiB 00:59:38.563 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:59:38.563 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:59:38.563 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57785 00:59:38.563 element at address: 0x20000085e640 with size: 0.125488 MiB 00:59:38.563 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57785 00:59:38.563 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:59:38.563 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:59:38.563 element at address: 0x200028265980 with size: 0.023743 MiB 00:59:38.563 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:59:38.563 element at address: 0x20000085a380 with size: 0.016113 MiB 00:59:38.563 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57785 00:59:38.563 element at address: 0x20002826bac0 with size: 0.002441 MiB 00:59:38.563 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:59:38.563 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:59:38.563 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57785 00:59:38.563 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:59:38.563 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57785 00:59:38.563 element at address: 0x20000085a180 with size: 0.000305 MiB 00:59:38.563 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57785 00:59:38.563 element at address: 0x20002826c580 with size: 0.000305 MiB 00:59:38.563 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:59:38.563 05:58:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:59:38.563 05:58:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57785 00:59:38.563 05:58:32 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57785 ']' 00:59:38.563 05:58:32 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57785 00:59:38.563 05:58:32 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:59:38.563 05:58:32 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:59:38.563 05:58:32 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57785 00:59:38.563 killing process with pid 57785 00:59:38.563 05:58:32 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:59:38.563 05:58:32 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:59:38.563 05:58:32 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57785' 00:59:38.563 05:58:32 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57785 00:59:38.563 05:58:32 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57785 00:59:38.823 00:59:38.823 real 0m1.658s 00:59:38.823 user 0m1.642s 00:59:38.823 sys 0m0.490s 00:59:38.823 05:58:33 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:38.823 05:58:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:59:38.823 ************************************ 00:59:38.823 END TEST dpdk_mem_utility 00:59:38.823 ************************************ 00:59:38.823 05:58:33 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:59:38.823 05:58:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:38.823 05:58:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:38.823 05:58:33 -- common/autotest_common.sh@10 -- # set +x 
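The dpdk_mem_utility run that ends above boils down to a short command sequence: start spdk_tgt, ask it over JSON-RPC to dump its DPDK memory state to a file, then post-process that dump with scripts/dpdk_mem_info.py (once for the summary, once with -m 0 for the heap-0 detail shown in the trace). A minimal stand-alone sketch of that flow, using the paths from the log; the sleep and the plain kill are stand-ins for the waitforlisten/killprocess helpers that autotest_common.sh provides:

#!/usr/bin/env bash
# Sketch of the traced dpdk_mem_utility flow (no autotest_common.sh plumbing).
SPDK_DIR=/home/vagrant/spdk_repo/spdk

"$SPDK_DIR/build/bin/spdk_tgt" &      # target process; pid 57785 in the run above
spdk_pid=$!
sleep 2                               # crude stand-in for waitforlisten on /var/tmp/spdk.sock

# Ask the target to write its DPDK memory stats; the RPC replies with
# {"filename": "/tmp/spdk_mem_dump.txt"}, the dump the trace above post-processes.
"$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats

"$SPDK_DIR/scripts/dpdk_mem_info.py"        # heap/mempool/memzone summary
"$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0   # full element list for heap id 0

kill "$spdk_pid"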
00:59:38.823 ************************************ 00:59:38.823 START TEST event 00:59:38.823 ************************************ 00:59:38.823 05:58:33 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:59:39.084 * Looking for test storage... 00:59:39.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:59:39.084 05:58:33 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:59:39.084 05:58:33 event -- common/autotest_common.sh@1711 -- # lcov --version 00:59:39.084 05:58:33 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:59:39.084 05:58:33 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:59:39.084 05:58:33 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:59:39.084 05:58:33 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:59:39.084 05:58:33 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:59:39.084 05:58:33 event -- scripts/common.sh@336 -- # IFS=.-: 00:59:39.084 05:58:33 event -- scripts/common.sh@336 -- # read -ra ver1 00:59:39.084 05:58:33 event -- scripts/common.sh@337 -- # IFS=.-: 00:59:39.084 05:58:33 event -- scripts/common.sh@337 -- # read -ra ver2 00:59:39.084 05:58:33 event -- scripts/common.sh@338 -- # local 'op=<' 00:59:39.084 05:58:33 event -- scripts/common.sh@340 -- # ver1_l=2 00:59:39.084 05:58:33 event -- scripts/common.sh@341 -- # ver2_l=1 00:59:39.084 05:58:33 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:59:39.084 05:58:33 event -- scripts/common.sh@344 -- # case "$op" in 00:59:39.084 05:58:33 event -- scripts/common.sh@345 -- # : 1 00:59:39.084 05:58:33 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:59:39.084 05:58:33 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:59:39.084 05:58:33 event -- scripts/common.sh@365 -- # decimal 1 00:59:39.084 05:58:33 event -- scripts/common.sh@353 -- # local d=1 00:59:39.084 05:58:33 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:59:39.084 05:58:33 event -- scripts/common.sh@355 -- # echo 1 00:59:39.084 05:58:33 event -- scripts/common.sh@365 -- # ver1[v]=1 00:59:39.084 05:58:33 event -- scripts/common.sh@366 -- # decimal 2 00:59:39.084 05:58:33 event -- scripts/common.sh@353 -- # local d=2 00:59:39.084 05:58:33 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:59:39.084 05:58:33 event -- scripts/common.sh@355 -- # echo 2 00:59:39.084 05:58:33 event -- scripts/common.sh@366 -- # ver2[v]=2 00:59:39.084 05:58:33 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:59:39.084 05:58:33 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:59:39.084 05:58:33 event -- scripts/common.sh@368 -- # return 0 00:59:39.084 05:58:33 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:59:39.084 05:58:33 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:59:39.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:39.084 --rc genhtml_branch_coverage=1 00:59:39.084 --rc genhtml_function_coverage=1 00:59:39.084 --rc genhtml_legend=1 00:59:39.084 --rc geninfo_all_blocks=1 00:59:39.084 --rc geninfo_unexecuted_blocks=1 00:59:39.084 00:59:39.084 ' 00:59:39.084 05:58:33 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:59:39.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:39.084 --rc genhtml_branch_coverage=1 00:59:39.084 --rc genhtml_function_coverage=1 00:59:39.084 --rc genhtml_legend=1 00:59:39.084 --rc 
geninfo_all_blocks=1 00:59:39.084 --rc geninfo_unexecuted_blocks=1 00:59:39.084 00:59:39.084 ' 00:59:39.084 05:58:33 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:59:39.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:39.084 --rc genhtml_branch_coverage=1 00:59:39.084 --rc genhtml_function_coverage=1 00:59:39.084 --rc genhtml_legend=1 00:59:39.084 --rc geninfo_all_blocks=1 00:59:39.084 --rc geninfo_unexecuted_blocks=1 00:59:39.084 00:59:39.084 ' 00:59:39.084 05:58:33 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:59:39.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:39.084 --rc genhtml_branch_coverage=1 00:59:39.084 --rc genhtml_function_coverage=1 00:59:39.084 --rc genhtml_legend=1 00:59:39.084 --rc geninfo_all_blocks=1 00:59:39.084 --rc geninfo_unexecuted_blocks=1 00:59:39.084 00:59:39.084 ' 00:59:39.084 05:58:33 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:59:39.084 05:58:33 event -- bdev/nbd_common.sh@6 -- # set -e 00:59:39.084 05:58:33 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:59:39.084 05:58:33 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:59:39.084 05:58:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:39.084 05:58:33 event -- common/autotest_common.sh@10 -- # set +x 00:59:39.084 ************************************ 00:59:39.084 START TEST event_perf 00:59:39.084 ************************************ 00:59:39.084 05:58:33 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:59:39.084 Running I/O for 1 seconds...[2024-12-09 05:58:33.623626] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:59:39.084 [2024-12-09 05:58:33.623879] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57870 ] 00:59:39.344 [2024-12-09 05:58:33.778388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:59:39.344 [2024-12-09 05:58:33.826886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:59:39.344 [2024-12-09 05:58:33.827075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:59:39.344 [2024-12-09 05:58:33.827249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:59:39.344 Running I/O for 1 seconds...[2024-12-09 05:58:33.827253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:59:40.283 00:59:40.283 lcore 0: 149862 00:59:40.283 lcore 1: 149859 00:59:40.283 lcore 2: 149862 00:59:40.283 lcore 3: 149865 00:59:40.283 done. 
00:59:40.543 00:59:40.543 real 0m1.272s 00:59:40.543 user 0m4.091s 00:59:40.543 sys 0m0.056s 00:59:40.543 05:58:34 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:40.543 ************************************ 00:59:40.543 END TEST event_perf 00:59:40.543 ************************************ 00:59:40.543 05:58:34 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:59:40.543 05:58:34 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:59:40.543 05:58:34 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:59:40.543 05:58:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:40.543 05:58:34 event -- common/autotest_common.sh@10 -- # set +x 00:59:40.543 ************************************ 00:59:40.543 START TEST event_reactor 00:59:40.543 ************************************ 00:59:40.543 05:58:34 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:59:40.543 [2024-12-09 05:58:34.974866] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:59:40.543 [2024-12-09 05:58:34.975207] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57903 ] 00:59:40.812 [2024-12-09 05:58:35.131262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:59:40.813 [2024-12-09 05:58:35.176715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:59:41.751 test_start 00:59:41.751 oneshot 00:59:41.751 tick 100 00:59:41.751 tick 100 00:59:41.751 tick 250 00:59:41.751 tick 100 00:59:41.751 tick 100 00:59:41.751 tick 100 00:59:41.751 tick 250 00:59:41.751 tick 500 00:59:41.751 tick 100 00:59:41.751 tick 100 00:59:41.751 tick 250 00:59:41.751 tick 100 00:59:41.751 tick 100 00:59:41.751 test_end 00:59:41.751 00:59:41.751 real 0m1.264s 00:59:41.751 user 0m1.105s 00:59:41.751 sys 0m0.052s 00:59:41.751 05:58:36 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:41.751 05:58:36 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:59:41.751 ************************************ 00:59:41.751 END TEST event_reactor 00:59:41.751 ************************************ 00:59:41.751 05:58:36 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:59:41.751 05:58:36 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:59:41.751 05:58:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:41.751 05:58:36 event -- common/autotest_common.sh@10 -- # set +x 00:59:41.751 ************************************ 00:59:41.751 START TEST event_reactor_perf 00:59:41.751 ************************************ 00:59:41.751 05:58:36 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:59:41.751 [2024-12-09 05:58:36.318704] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:59:41.751 [2024-12-09 05:58:36.318798] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57933 ] 00:59:42.010 [2024-12-09 05:58:36.470650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:59:42.010 [2024-12-09 05:58:36.518471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:59:43.393 test_start 00:59:43.393 test_end 00:59:43.393 Performance: 519203 events per second 00:59:43.393 00:59:43.393 real 0m1.260s 00:59:43.393 user 0m1.100s 00:59:43.393 sys 0m0.054s 00:59:43.393 ************************************ 00:59:43.393 END TEST event_reactor_perf 00:59:43.393 ************************************ 00:59:43.393 05:58:37 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:43.393 05:58:37 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:59:43.393 05:58:37 event -- event/event.sh@49 -- # uname -s 00:59:43.393 05:58:37 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:59:43.393 05:58:37 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:59:43.393 05:58:37 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:43.393 05:58:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:43.393 05:58:37 event -- common/autotest_common.sh@10 -- # set +x 00:59:43.393 ************************************ 00:59:43.393 START TEST event_scheduler 00:59:43.393 ************************************ 00:59:43.393 05:58:37 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:59:43.393 * Looking for test storage... 
00:59:43.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:59:43.393 05:58:37 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:59:43.393 05:58:37 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:59:43.393 05:58:37 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:59:43.393 05:58:37 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:59:43.393 05:58:37 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:59:43.393 05:58:37 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:59:43.393 05:58:37 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:59:43.393 05:58:37 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:59:43.393 05:58:37 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:59:43.393 05:58:37 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:59:43.393 05:58:37 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:59:43.393 05:58:37 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:59:43.393 05:58:37 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:59:43.393 05:58:37 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:59:43.393 05:58:37 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:59:43.393 05:58:37 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:59:43.393 05:58:37 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:59:43.393 05:58:37 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:59:43.393 05:58:37 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:59:43.393 05:58:37 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:59:43.393 05:58:37 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:59:43.393 05:58:37 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:59:43.393 05:58:37 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:59:43.393 05:58:37 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:59:43.393 05:58:37 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:59:43.393 05:58:37 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:59:43.393 05:58:37 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:59:43.393 05:58:37 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:59:43.393 05:58:37 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:59:43.393 05:58:37 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:59:43.393 05:58:37 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:59:43.393 05:58:37 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:59:43.393 05:58:37 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:59:43.393 05:58:37 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:59:43.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:43.393 --rc genhtml_branch_coverage=1 00:59:43.393 --rc genhtml_function_coverage=1 00:59:43.393 --rc genhtml_legend=1 00:59:43.393 --rc geninfo_all_blocks=1 00:59:43.393 --rc geninfo_unexecuted_blocks=1 00:59:43.393 00:59:43.393 ' 00:59:43.393 05:58:37 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:59:43.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:43.393 --rc genhtml_branch_coverage=1 00:59:43.393 --rc genhtml_function_coverage=1 00:59:43.393 --rc genhtml_legend=1 00:59:43.393 --rc geninfo_all_blocks=1 00:59:43.393 --rc geninfo_unexecuted_blocks=1 00:59:43.393 00:59:43.393 ' 00:59:43.393 05:58:37 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:59:43.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:43.393 --rc genhtml_branch_coverage=1 00:59:43.393 --rc genhtml_function_coverage=1 00:59:43.393 --rc genhtml_legend=1 00:59:43.393 --rc geninfo_all_blocks=1 00:59:43.393 --rc geninfo_unexecuted_blocks=1 00:59:43.393 00:59:43.393 ' 00:59:43.393 05:58:37 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:59:43.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:43.393 --rc genhtml_branch_coverage=1 00:59:43.393 --rc genhtml_function_coverage=1 00:59:43.393 --rc genhtml_legend=1 00:59:43.393 --rc geninfo_all_blocks=1 00:59:43.393 --rc geninfo_unexecuted_blocks=1 00:59:43.393 00:59:43.393 ' 00:59:43.393 05:58:37 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:59:43.393 05:58:37 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58008 00:59:43.393 05:58:37 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:59:43.393 05:58:37 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:59:43.393 05:58:37 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58008 00:59:43.393 05:58:37 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58008 ']' 00:59:43.393 05:58:37 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:59:43.393 05:58:37 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:59:43.393 05:58:37 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:59:43.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:59:43.393 05:58:37 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:59:43.393 05:58:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:59:43.393 [2024-12-09 05:58:37.919794] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:59:43.393 [2024-12-09 05:58:37.919968] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58008 ] 00:59:43.653 [2024-12-09 05:58:38.071027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:59:43.653 [2024-12-09 05:58:38.132992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:59:43.653 [2024-12-09 05:58:38.133171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:59:43.653 [2024-12-09 05:58:38.133331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:59:43.653 [2024-12-09 05:58:38.133333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:59:44.223 05:58:38 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:59:44.223 05:58:38 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:59:44.223 05:58:38 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:59:44.223 05:58:38 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:44.223 05:58:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:59:44.223 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:59:44.223 POWER: Cannot set governor of lcore 0 to userspace 00:59:44.223 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:59:44.223 POWER: Cannot set governor of lcore 0 to performance 00:59:44.223 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:59:44.223 POWER: Cannot set governor of lcore 0 to userspace 00:59:44.223 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:59:44.223 POWER: Cannot set governor of lcore 0 to userspace 00:59:44.223 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:59:44.223 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:59:44.223 POWER: Unable to set Power Management Environment for lcore 0 00:59:44.223 [2024-12-09 05:58:38.790516] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:59:44.223 [2024-12-09 05:58:38.790528] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:59:44.223 [2024-12-09 05:58:38.790537] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:59:44.223 [2024-12-09 05:58:38.790548] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:59:44.223 [2024-12-09 05:58:38.790555] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:59:44.223 [2024-12-09 05:58:38.790562] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:59:44.223 05:58:38 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:44.223 05:58:38 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:59:44.223 05:58:38 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:44.223 05:58:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:59:44.483 [2024-12-09 05:58:38.868923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:59:44.483 [2024-12-09 05:58:38.908203] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:59:44.483 05:58:38 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:44.483 05:58:38 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:59:44.483 05:58:38 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:44.483 05:58:38 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:44.483 05:58:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:59:44.483 ************************************ 00:59:44.483 START TEST scheduler_create_thread 00:59:44.483 ************************************ 00:59:44.483 05:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:59:44.483 05:58:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:59:44.483 05:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:44.483 05:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:59:44.483 2 00:59:44.483 05:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:44.483 05:58:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:59:44.483 05:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:44.483 05:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:59:44.483 3 00:59:44.483 05:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:44.483 05:58:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:59:44.483 05:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:44.483 05:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:59:44.483 4 00:59:44.483 05:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:44.483 05:58:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:59:44.483 05:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:44.483 05:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:59:44.483 5 00:59:44.483 05:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:44.483 05:58:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:59:44.483 05:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:44.483 05:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:59:44.483 6 00:59:44.483 05:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:44.483 05:58:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:59:44.483 05:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:44.483 05:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:59:44.483 7 00:59:44.483 05:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:44.483 05:58:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:59:44.483 05:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:44.483 05:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:59:44.483 8 00:59:44.483 05:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:44.483 05:58:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:59:44.483 05:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:44.483 05:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:59:44.483 9 00:59:44.483 05:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:44.483 05:58:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:59:44.483 05:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:44.483 05:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:59:45.050 10 00:59:45.050 05:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:45.050 05:58:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:59:45.050 05:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:45.050 05:58:39 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:59:46.425 05:58:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:46.425 05:58:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:59:46.425 05:58:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:59:46.425 05:58:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:46.425 05:58:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:59:47.361 05:58:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:47.361 05:58:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:59:47.361 05:58:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:47.361 05:58:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:59:47.928 05:58:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:47.928 05:58:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:59:47.928 05:58:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:59:47.928 05:58:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:47.928 05:58:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:59:48.866 ************************************ 00:59:48.866 END TEST scheduler_create_thread 00:59:48.866 ************************************ 00:59:48.866 05:58:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:48.866 00:59:48.866 real 0m4.209s 00:59:48.866 user 0m0.030s 00:59:48.866 sys 0m0.004s 00:59:48.866 05:58:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:48.866 05:58:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:59:48.866 05:58:43 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:59:48.866 05:58:43 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58008 00:59:48.866 05:58:43 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58008 ']' 00:59:48.866 05:58:43 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58008 00:59:48.866 05:58:43 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:59:48.866 05:58:43 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:59:48.866 05:58:43 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58008 00:59:48.866 killing process with pid 58008 00:59:48.866 05:58:43 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:59:48.866 05:58:43 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:59:48.866 05:58:43 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
58008' 00:59:48.866 05:58:43 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58008 00:59:48.866 05:58:43 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58008 00:59:49.125 [2024-12-09 05:58:43.512951] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:59:49.385 00:59:49.385 real 0m6.192s 00:59:49.385 user 0m13.957s 00:59:49.385 sys 0m0.515s 00:59:49.385 05:58:43 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:49.385 ************************************ 00:59:49.385 END TEST event_scheduler 00:59:49.385 ************************************ 00:59:49.385 05:58:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:59:49.385 05:58:43 event -- event/event.sh@51 -- # modprobe -n nbd 00:59:49.385 05:58:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:59:49.385 05:58:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:49.385 05:58:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:49.385 05:58:43 event -- common/autotest_common.sh@10 -- # set +x 00:59:49.385 ************************************ 00:59:49.385 START TEST app_repeat 00:59:49.385 ************************************ 00:59:49.385 05:58:43 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:59:49.385 05:58:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:59:49.385 05:58:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:59:49.385 05:58:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:59:49.385 05:58:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:59:49.385 05:58:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:59:49.385 05:58:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:59:49.385 05:58:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:59:49.385 05:58:43 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58124 00:59:49.385 05:58:43 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:59:49.385 05:58:43 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:59:49.385 05:58:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58124' 00:59:49.385 Process app_repeat pid: 58124 00:59:49.385 05:58:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:59:49.385 05:58:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:59:49.385 spdk_app_start Round 0 00:59:49.385 05:58:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58124 /var/tmp/spdk-nbd.sock 00:59:49.385 05:58:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58124 ']' 00:59:49.385 05:58:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:59:49.385 05:58:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:59:49.385 05:58:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:59:49.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
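For reference, the scheduler exercise above reduces to a short RPC sequence, all of it visible in the trace. A rough standalone sketch, assuming a target already listening on /var/tmp/spdk.sock and the test's scheduler_plugin importable by rpc.py (the scheduler_thread_* methods are test-plugin RPCs, not core SPDK ones; thread IDs 11 and 12 are simply the ones this run assigned):

    # switch the running app to the dynamic scheduler, then finish subsystem init
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init
    # create pinned busy/idle test threads, retune one, delete another
    ./scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    ./scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
    ./scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_delete 12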
00:59:49.385 05:58:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:59:49.385 05:58:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:59:49.385 [2024-12-09 05:58:43.955407] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:59:49.385 [2024-12-09 05:58:43.955493] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58124 ] 00:59:49.645 [2024-12-09 05:58:44.110275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:59:49.645 [2024-12-09 05:58:44.151292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:59:49.645 [2024-12-09 05:58:44.151294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:59:49.645 [2024-12-09 05:58:44.193681] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:59:50.582 05:58:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:59:50.582 05:58:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:59:50.582 05:58:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:59:50.582 Malloc0 00:59:50.582 05:58:45 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:59:50.841 Malloc1 00:59:50.841 05:58:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:59:50.841 05:58:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:59:50.841 05:58:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:59:50.841 05:58:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:59:50.841 05:58:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:59:50.841 05:58:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:59:50.841 05:58:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:59:50.841 05:58:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:59:50.841 05:58:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:59:50.841 05:58:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:59:50.841 05:58:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:59:50.841 05:58:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:59:50.841 05:58:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:59:50.841 05:58:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:59:50.841 05:58:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:59:50.841 05:58:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:59:51.100 /dev/nbd0 00:59:51.100 05:58:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:59:51.100 05:58:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:59:51.100 05:58:45 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:59:51.100 05:58:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:59:51.100 05:58:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:59:51.100 05:58:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:59:51.100 05:58:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:59:51.100 05:58:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:59:51.100 05:58:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:59:51.100 05:58:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:59:51.100 05:58:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:59:51.100 1+0 records in 00:59:51.100 1+0 records out 00:59:51.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330281 s, 12.4 MB/s 00:59:51.100 05:58:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:59:51.100 05:58:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:59:51.100 05:58:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:59:51.100 05:58:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:59:51.100 05:58:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:59:51.100 05:58:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:59:51.100 05:58:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:59:51.100 05:58:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:59:51.359 /dev/nbd1 00:59:51.359 05:58:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:59:51.359 05:58:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:59:51.359 05:58:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:59:51.359 05:58:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:59:51.359 05:58:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:59:51.359 05:58:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:59:51.359 05:58:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:59:51.359 05:58:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:59:51.359 05:58:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:59:51.359 05:58:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:59:51.359 05:58:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:59:51.359 1+0 records in 00:59:51.359 1+0 records out 00:59:51.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358069 s, 11.4 MB/s 00:59:51.359 05:58:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:59:51.359 05:58:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:59:51.359 05:58:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:59:51.359 05:58:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:59:51.359 05:58:45 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:59:51.359 05:58:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:59:51.359 05:58:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:59:51.359 05:58:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:59:51.359 05:58:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:59:51.359 05:58:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:59:51.618 { 00:59:51.618 "nbd_device": "/dev/nbd0", 00:59:51.618 "bdev_name": "Malloc0" 00:59:51.618 }, 00:59:51.618 { 00:59:51.618 "nbd_device": "/dev/nbd1", 00:59:51.618 "bdev_name": "Malloc1" 00:59:51.618 } 00:59:51.618 ]' 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:59:51.618 { 00:59:51.618 "nbd_device": "/dev/nbd0", 00:59:51.618 "bdev_name": "Malloc0" 00:59:51.618 }, 00:59:51.618 { 00:59:51.618 "nbd_device": "/dev/nbd1", 00:59:51.618 "bdev_name": "Malloc1" 00:59:51.618 } 00:59:51.618 ]' 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:59:51.618 /dev/nbd1' 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:59:51.618 /dev/nbd1' 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:59:51.618 256+0 records in 00:59:51.618 256+0 records out 00:59:51.618 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013505 s, 77.6 MB/s 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:59:51.618 256+0 records in 00:59:51.618 256+0 records out 00:59:51.618 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0295641 s, 35.5 MB/s 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:59:51.618 256+0 records in 00:59:51.618 
256+0 records out 00:59:51.618 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269881 s, 38.9 MB/s 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:59:51.618 05:58:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:59:51.877 05:58:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:59:51.877 05:58:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:59:51.877 05:58:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:59:51.877 05:58:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:59:51.877 05:58:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:59:51.877 05:58:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:59:51.877 05:58:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:59:51.877 05:58:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:59:51.877 05:58:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:59:51.877 05:58:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:59:52.135 05:58:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:59:52.135 05:58:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:59:52.135 05:58:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:59:52.135 05:58:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:59:52.135 05:58:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:59:52.135 05:58:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:59:52.135 05:58:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:59:52.135 05:58:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:59:52.135 05:58:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:59:52.135 05:58:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:59:52.135 05:58:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:59:52.394 05:58:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:59:52.394 05:58:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:59:52.394 05:58:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:59:52.394 05:58:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:59:52.394 05:58:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:59:52.394 05:58:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:59:52.394 05:58:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:59:52.394 05:58:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:59:52.394 05:58:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:59:52.394 05:58:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:59:52.394 05:58:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:59:52.394 05:58:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:59:52.394 05:58:46 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:59:52.653 05:58:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:59:52.912 [2024-12-09 05:58:47.333074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:59:52.912 [2024-12-09 05:58:47.381164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:59:52.912 [2024-12-09 05:58:47.381166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:59:52.912 [2024-12-09 05:58:47.451436] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:59:52.912 [2024-12-09 05:58:47.451512] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:59:52.912 [2024-12-09 05:58:47.451523] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:59:56.205 spdk_app_start Round 1 00:59:56.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:59:56.205 05:58:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:59:56.205 05:58:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:59:56.205 05:58:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58124 /var/tmp/spdk-nbd.sock 00:59:56.205 05:58:50 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58124 ']' 00:59:56.205 05:58:50 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:59:56.205 05:58:50 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:59:56.205 05:58:50 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
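Each app_repeat round recorded here is the same NBD data-verify pass: create two 64 MiB Malloc bdevs with a 4 KiB block size, export them as /dev/nbd0 and /dev/nbd1, write a random 1 MiB pattern, compare it back, and tear everything down. A condensed sketch of one pass, using the RPCs and sizes seen in the trace (the temp-file path is illustrative):

    RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096          # Malloc0
    $RPC bdev_malloc_create 64 4096          # Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of=$dev bs=4096 count=256 oflag=direct
        cmp -b -n 1M /tmp/nbdrandtest $dev   # verify the written data
    done
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1
    $RPC nbd_get_disks                       # expect an empty list afterwards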
00:59:56.205 05:58:50 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:59:56.205 05:58:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:59:56.205 05:58:50 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:59:56.205 05:58:50 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:59:56.205 05:58:50 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:59:56.205 Malloc0 00:59:56.205 05:58:50 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:59:56.205 Malloc1 00:59:56.205 05:58:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:59:56.205 05:58:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:59:56.205 05:58:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:59:56.205 05:58:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:59:56.205 05:58:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:59:56.205 05:58:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:59:56.205 05:58:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:59:56.205 05:58:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:59:56.205 05:58:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:59:56.205 05:58:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:59:56.205 05:58:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:59:56.205 05:58:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:59:56.205 05:58:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:59:56.205 05:58:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:59:56.205 05:58:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:59:56.205 05:58:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:59:56.464 /dev/nbd0 00:59:56.464 05:58:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:59:56.464 05:58:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:59:56.464 05:58:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:59:56.464 05:58:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:59:56.464 05:58:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:59:56.464 05:58:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:59:56.464 05:58:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:59:56.464 05:58:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:59:56.464 05:58:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:59:56.464 05:58:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:59:56.464 05:58:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:59:56.464 1+0 records in 00:59:56.464 1+0 records out 
00:59:56.464 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412979 s, 9.9 MB/s 00:59:56.464 05:58:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:59:56.464 05:58:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:59:56.464 05:58:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:59:56.464 05:58:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:59:56.465 05:58:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:59:56.465 05:58:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:59:56.465 05:58:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:59:56.465 05:58:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:59:56.724 /dev/nbd1 00:59:56.724 05:58:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:59:56.724 05:58:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:59:56.724 05:58:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:59:56.724 05:58:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:59:56.724 05:58:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:59:56.724 05:58:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:59:56.724 05:58:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:59:56.724 05:58:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:59:56.724 05:58:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:59:56.724 05:58:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:59:56.724 05:58:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:59:56.724 1+0 records in 00:59:56.724 1+0 records out 00:59:56.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365441 s, 11.2 MB/s 00:59:56.724 05:58:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:59:56.724 05:58:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:59:56.724 05:58:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:59:56.724 05:58:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:59:56.724 05:58:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:59:56.724 05:58:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:59:56.724 05:58:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:59:56.724 05:58:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:59:56.724 05:58:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:59:56.724 05:58:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:59:56.983 05:58:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:59:56.983 { 00:59:56.983 "nbd_device": "/dev/nbd0", 00:59:56.983 "bdev_name": "Malloc0" 00:59:56.983 }, 00:59:56.983 { 00:59:56.983 "nbd_device": "/dev/nbd1", 00:59:56.983 "bdev_name": "Malloc1" 00:59:56.983 } 
00:59:56.983 ]' 00:59:56.983 05:58:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:59:56.983 { 00:59:56.983 "nbd_device": "/dev/nbd0", 00:59:56.983 "bdev_name": "Malloc0" 00:59:56.983 }, 00:59:56.983 { 00:59:56.983 "nbd_device": "/dev/nbd1", 00:59:56.983 "bdev_name": "Malloc1" 00:59:56.983 } 00:59:56.983 ]' 00:59:56.983 05:58:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:59:56.983 05:58:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:59:56.983 /dev/nbd1' 00:59:56.983 05:58:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:59:56.983 /dev/nbd1' 00:59:56.983 05:58:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:59:56.983 05:58:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:59:56.983 05:58:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:59:56.983 05:58:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:59:56.983 05:58:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:59:56.983 05:58:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:59:56.983 05:58:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:59:56.983 05:58:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:59:56.983 05:58:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:59:56.983 05:58:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:59:56.983 05:58:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:59:56.983 05:58:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:59:56.983 256+0 records in 00:59:56.983 256+0 records out 00:59:56.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00525209 s, 200 MB/s 00:59:56.984 05:58:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:59:56.984 05:58:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:59:56.984 256+0 records in 00:59:56.984 256+0 records out 00:59:56.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260381 s, 40.3 MB/s 00:59:56.984 05:58:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:59:56.984 05:58:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:59:56.984 256+0 records in 00:59:56.984 256+0 records out 00:59:56.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0317344 s, 33.0 MB/s 00:59:56.984 05:58:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:59:56.984 05:58:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:59:56.984 05:58:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:59:56.984 05:58:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:59:56.984 05:58:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:59:56.984 05:58:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:59:56.984 05:58:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:59:56.984 05:58:51 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:59:56.984 05:58:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:59:56.984 05:58:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:59:56.984 05:58:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:59:57.243 05:58:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:59:57.243 05:58:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:59:57.243 05:58:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:59:57.243 05:58:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:59:57.243 05:58:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:59:57.243 05:58:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:59:57.243 05:58:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:59:57.243 05:58:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:59:57.243 05:58:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:59:57.243 05:58:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:59:57.243 05:58:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:59:57.243 05:58:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:59:57.243 05:58:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:59:57.243 05:58:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:59:57.243 05:58:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:59:57.243 05:58:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:59:57.243 05:58:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:59:57.243 05:58:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:59:57.501 05:58:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:59:57.501 05:58:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:59:57.501 05:58:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:59:57.501 05:58:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:59:57.501 05:58:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:59:57.501 05:58:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:59:57.501 05:58:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:59:57.501 05:58:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:59:57.501 05:58:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:59:57.501 05:58:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:59:57.501 05:58:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:59:57.760 05:58:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:59:57.760 05:58:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:59:57.760 05:58:52 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:59:57.760 05:58:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:59:57.760 05:58:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:59:57.760 05:58:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:59:57.760 05:58:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:59:57.760 05:58:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:59:57.760 05:58:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:59:57.760 05:58:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:59:57.760 05:58:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:59:57.760 05:58:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:59:57.760 05:58:52 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:59:58.019 05:58:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:59:58.277 [2024-12-09 05:58:52.727713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:59:58.277 [2024-12-09 05:58:52.774303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:59:58.277 [2024-12-09 05:58:52.774342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:59:58.277 [2024-12-09 05:58:52.849180] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:59:58.277 [2024-12-09 05:58:52.849258] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:59:58.277 [2024-12-09 05:58:52.849270] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 01:00:01.608 spdk_app_start Round 2 01:00:01.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:00:01.608 05:58:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 01:00:01.608 05:58:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 01:00:01.608 05:58:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58124 /var/tmp/spdk-nbd.sock 01:00:01.608 05:58:55 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58124 ']' 01:00:01.608 05:58:55 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:00:01.608 05:58:55 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:01.608 05:58:55 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
01:00:01.608 05:58:55 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:01.608 05:58:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:00:01.608 05:58:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:01.608 05:58:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 01:00:01.608 05:58:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:00:01.608 Malloc0 01:00:01.608 05:58:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:00:01.608 Malloc1 01:00:01.608 05:58:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:00:01.608 05:58:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:00:01.608 05:58:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 01:00:01.608 05:58:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 01:00:01.608 05:58:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:00:01.608 05:58:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 01:00:01.608 05:58:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:00:01.608 05:58:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:00:01.608 05:58:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 01:00:01.608 05:58:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 01:00:01.608 05:58:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:00:01.608 05:58:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 01:00:01.608 05:58:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 01:00:01.608 05:58:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:00:01.608 05:58:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:00:01.608 05:58:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 01:00:01.867 /dev/nbd0 01:00:01.867 05:58:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:00:01.867 05:58:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:00:01.867 05:58:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:00:01.867 05:58:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:00:01.867 05:58:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:00:01.867 05:58:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:00:01.867 05:58:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:00:01.867 05:58:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 01:00:01.867 05:58:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:00:01.867 05:58:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:00:01.867 05:58:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:00:01.867 1+0 records in 01:00:01.867 1+0 records out 
01:00:01.867 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000840892 s, 4.9 MB/s 01:00:01.867 05:58:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:00:01.867 05:58:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:00:01.867 05:58:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:00:02.126 05:58:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:00:02.126 05:58:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:00:02.126 05:58:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:00:02.126 05:58:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:00:02.126 05:58:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 01:00:02.126 /dev/nbd1 01:00:02.126 05:58:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:00:02.126 05:58:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:00:02.126 05:58:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:00:02.126 05:58:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:00:02.126 05:58:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:00:02.126 05:58:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:00:02.126 05:58:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:00:02.126 05:58:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 01:00:02.126 05:58:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:00:02.126 05:58:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:00:02.126 05:58:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:00:02.126 1+0 records in 01:00:02.126 1+0 records out 01:00:02.126 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000446157 s, 9.2 MB/s 01:00:02.126 05:58:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:00:02.126 05:58:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:00:02.126 05:58:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:00:02.126 05:58:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:00:02.126 05:58:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:00:02.126 05:58:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:00:02.126 05:58:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:00:02.126 05:58:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:00:02.126 05:58:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:00:02.126 05:58:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:00:02.386 05:58:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:00:02.386 { 01:00:02.386 "nbd_device": "/dev/nbd0", 01:00:02.386 "bdev_name": "Malloc0" 01:00:02.386 }, 01:00:02.386 { 01:00:02.386 "nbd_device": "/dev/nbd1", 01:00:02.386 "bdev_name": "Malloc1" 01:00:02.386 } 
01:00:02.386 ]' 01:00:02.386 05:58:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:00:02.386 05:58:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 01:00:02.386 { 01:00:02.386 "nbd_device": "/dev/nbd0", 01:00:02.386 "bdev_name": "Malloc0" 01:00:02.386 }, 01:00:02.386 { 01:00:02.386 "nbd_device": "/dev/nbd1", 01:00:02.386 "bdev_name": "Malloc1" 01:00:02.386 } 01:00:02.386 ]' 01:00:02.387 05:58:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 01:00:02.387 /dev/nbd1' 01:00:02.387 05:58:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 01:00:02.387 /dev/nbd1' 01:00:02.387 05:58:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:00:02.387 05:58:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 01:00:02.387 05:58:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 01:00:02.387 05:58:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 01:00:02.387 05:58:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 01:00:02.387 05:58:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 01:00:02.387 05:58:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:00:02.387 05:58:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:00:02.387 05:58:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 01:00:02.387 05:58:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:00:02.387 05:58:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:00:02.387 05:58:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 01:00:02.387 256+0 records in 01:00:02.387 256+0 records out 01:00:02.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130833 s, 80.1 MB/s 01:00:02.387 05:58:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:00:02.387 05:58:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:00:02.646 256+0 records in 01:00:02.646 256+0 records out 01:00:02.646 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0295518 s, 35.5 MB/s 01:00:02.646 05:58:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:00:02.646 05:58:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 01:00:02.646 256+0 records in 01:00:02.646 256+0 records out 01:00:02.646 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289384 s, 36.2 MB/s 01:00:02.646 05:58:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 01:00:02.646 05:58:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:00:02.646 05:58:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:00:02.646 05:58:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 01:00:02.646 05:58:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:00:02.646 05:58:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:00:02.646 05:58:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:00:02.646 05:58:57 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:00:02.646 05:58:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 01:00:02.646 05:58:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:00:02.646 05:58:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 01:00:02.646 05:58:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:00:02.646 05:58:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 01:00:02.646 05:58:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:00:02.646 05:58:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:00:02.646 05:58:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 01:00:02.646 05:58:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 01:00:02.646 05:58:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:00:02.646 05:58:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:00:02.906 05:58:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:00:02.906 05:58:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:00:02.906 05:58:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:00:02.906 05:58:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:00:02.906 05:58:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:00:02.906 05:58:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:00:02.906 05:58:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:00:02.906 05:58:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:00:02.906 05:58:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:00:02.906 05:58:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:00:02.906 05:58:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:00:02.906 05:58:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:00:02.906 05:58:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:00:02.907 05:58:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:00:02.907 05:58:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:00:02.907 05:58:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:00:02.907 05:58:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:00:02.907 05:58:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:00:02.907 05:58:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:00:02.907 05:58:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:00:03.165 05:58:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:00:03.165 05:58:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:00:03.165 05:58:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 01:00:03.165 05:58:57 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 01:00:03.165 05:58:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:00:03.165 05:58:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 01:00:03.165 05:58:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:00:03.165 05:58:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 01:00:03.165 05:58:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 01:00:03.165 05:58:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 01:00:03.165 05:58:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 01:00:03.165 05:58:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:00:03.165 05:58:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 01:00:03.165 05:58:57 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 01:00:03.424 05:58:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 01:00:03.683 [2024-12-09 05:58:58.205717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:00:03.683 [2024-12-09 05:58:58.252517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:00:03.683 [2024-12-09 05:58:58.252528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:03.942 [2024-12-09 05:58:58.323510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:03.942 [2024-12-09 05:58:58.323586] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 01:00:03.942 [2024-12-09 05:58:58.323601] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 01:00:06.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:00:06.479 05:59:00 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58124 /var/tmp/spdk-nbd.sock 01:00:06.479 05:59:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58124 ']' 01:00:06.479 05:59:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:00:06.479 05:59:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:06.479 05:59:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
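The block above is the write/verify half of the nbd test: a 1 MiB random pattern is pushed through each exported /dev/nbdX with O_DIRECT, read back with cmp, and the devices are then detached over the SPDK RPC socket until they disappear from /proc/partitions. A minimal bash sketch of that flow, reconstructed from the traced commands (the temp-file path is shortened and the sleep between /proc/partitions polls is an assumption; the real helpers live in nbd_common.sh and differ in detail):

nbd_list=(/dev/nbd0 /dev/nbd1)
tmp_file=/tmp/nbdrandtest

# write a 1 MiB random pattern through every exported nbd device
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# read the data back and compare byte for byte
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
rm "$tmp_file"

# detach each device over the RPC socket and wait for the kernel to drop it
for dev in "${nbd_list[@]}"; do
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
    name=$(basename "$dev")
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions || break
        sleep 0.1    # assumed poll interval; the trace only shows the loop bounds
    done
done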
01:00:06.479 05:59:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:06.479 05:59:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:00:06.739 05:59:01 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:06.739 05:59:01 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 01:00:06.739 05:59:01 event.app_repeat -- event/event.sh@39 -- # killprocess 58124 01:00:06.739 05:59:01 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58124 ']' 01:00:06.739 05:59:01 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58124 01:00:06.739 05:59:01 event.app_repeat -- common/autotest_common.sh@959 -- # uname 01:00:06.739 05:59:01 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:00:06.739 05:59:01 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58124 01:00:06.739 killing process with pid 58124 01:00:06.739 05:59:01 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:00:06.739 05:59:01 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:00:06.739 05:59:01 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58124' 01:00:06.739 05:59:01 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58124 01:00:06.739 05:59:01 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58124 01:00:07.010 spdk_app_start is called in Round 0. 01:00:07.010 Shutdown signal received, stop current app iteration 01:00:07.010 Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 reinitialization... 01:00:07.010 spdk_app_start is called in Round 1. 01:00:07.010 Shutdown signal received, stop current app iteration 01:00:07.010 Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 reinitialization... 01:00:07.010 spdk_app_start is called in Round 2. 01:00:07.010 Shutdown signal received, stop current app iteration 01:00:07.010 Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 reinitialization... 01:00:07.010 spdk_app_start is called in Round 3. 01:00:07.010 Shutdown signal received, stop current app iteration 01:00:07.010 05:59:01 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 01:00:07.010 05:59:01 event.app_repeat -- event/event.sh@42 -- # return 0 01:00:07.010 ************************************ 01:00:07.010 END TEST app_repeat 01:00:07.010 ************************************ 01:00:07.010 01:00:07.010 real 0m17.573s 01:00:07.010 user 0m38.215s 01:00:07.010 sys 0m3.149s 01:00:07.010 05:59:01 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:07.010 05:59:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:00:07.010 05:59:01 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 01:00:07.010 05:59:01 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 01:00:07.010 05:59:01 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:00:07.010 05:59:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:07.010 05:59:01 event -- common/autotest_common.sh@10 -- # set +x 01:00:07.010 ************************************ 01:00:07.010 START TEST cpu_locks 01:00:07.010 ************************************ 01:00:07.010 05:59:01 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 01:00:07.294 * Looking for test storage... 
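killprocess, traced just above while tearing down the app_repeat target (pid 58124), is the teardown helper every following sub-test reuses: it resolves the process name, refuses to treat sudo as the target, sends SIGTERM and then waits on the pid so the next test starts from a clean slate. A sketch of the sequence the trace implies (the sudo branch and the error handling are assumptions; the real helper in autotest_common.sh is more thorough):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                     # refuse an empty pid
    kill -0 "$pid"                                # make sure it is still running
    local process_name=
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    echo "killing process with pid $pid"
    if [ "$process_name" = sudo ]; then
        # assumption: privileged targets are signalled through sudo; not exercised in this trace
        sudo kill "$pid"
    else
        kill "$pid"
    fi
    wait "$pid" || true                           # reap the child so ports and lock files are released
}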
01:00:07.294 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 01:00:07.294 05:59:01 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:00:07.294 05:59:01 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 01:00:07.294 05:59:01 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:00:07.294 05:59:01 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:00:07.294 05:59:01 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:00:07.294 05:59:01 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 01:00:07.294 05:59:01 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 01:00:07.294 05:59:01 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 01:00:07.294 05:59:01 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 01:00:07.294 05:59:01 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 01:00:07.294 05:59:01 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 01:00:07.294 05:59:01 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 01:00:07.294 05:59:01 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 01:00:07.294 05:59:01 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 01:00:07.294 05:59:01 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:00:07.294 05:59:01 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 01:00:07.294 05:59:01 event.cpu_locks -- scripts/common.sh@345 -- # : 1 01:00:07.294 05:59:01 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 01:00:07.294 05:59:01 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:00:07.294 05:59:01 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 01:00:07.294 05:59:01 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 01:00:07.294 05:59:01 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:00:07.294 05:59:01 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 01:00:07.294 05:59:01 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 01:00:07.294 05:59:01 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 01:00:07.294 05:59:01 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 01:00:07.294 05:59:01 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:00:07.294 05:59:01 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 01:00:07.294 05:59:01 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 01:00:07.294 05:59:01 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:00:07.294 05:59:01 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:00:07.294 05:59:01 event.cpu_locks -- scripts/common.sh@368 -- # return 0 01:00:07.294 05:59:01 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:00:07.294 05:59:01 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:00:07.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:07.294 --rc genhtml_branch_coverage=1 01:00:07.294 --rc genhtml_function_coverage=1 01:00:07.294 --rc genhtml_legend=1 01:00:07.294 --rc geninfo_all_blocks=1 01:00:07.294 --rc geninfo_unexecuted_blocks=1 01:00:07.294 01:00:07.294 ' 01:00:07.294 05:59:01 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:00:07.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:07.294 --rc genhtml_branch_coverage=1 01:00:07.294 --rc genhtml_function_coverage=1 
01:00:07.294 --rc genhtml_legend=1 01:00:07.294 --rc geninfo_all_blocks=1 01:00:07.294 --rc geninfo_unexecuted_blocks=1 01:00:07.294 01:00:07.294 ' 01:00:07.294 05:59:01 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:00:07.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:07.294 --rc genhtml_branch_coverage=1 01:00:07.294 --rc genhtml_function_coverage=1 01:00:07.294 --rc genhtml_legend=1 01:00:07.294 --rc geninfo_all_blocks=1 01:00:07.294 --rc geninfo_unexecuted_blocks=1 01:00:07.294 01:00:07.294 ' 01:00:07.294 05:59:01 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:00:07.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:07.294 --rc genhtml_branch_coverage=1 01:00:07.294 --rc genhtml_function_coverage=1 01:00:07.294 --rc genhtml_legend=1 01:00:07.294 --rc geninfo_all_blocks=1 01:00:07.294 --rc geninfo_unexecuted_blocks=1 01:00:07.294 01:00:07.294 ' 01:00:07.294 05:59:01 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 01:00:07.294 05:59:01 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 01:00:07.294 05:59:01 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 01:00:07.294 05:59:01 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 01:00:07.294 05:59:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:00:07.294 05:59:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:07.294 05:59:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:00:07.294 ************************************ 01:00:07.294 START TEST default_locks 01:00:07.294 ************************************ 01:00:07.294 05:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 01:00:07.294 05:59:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58548 01:00:07.294 05:59:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:00:07.294 05:59:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58548 01:00:07.294 05:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58548 ']' 01:00:07.294 05:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:07.294 05:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:07.294 05:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:00:07.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:07.294 05:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:07.294 05:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 01:00:07.565 [2024-12-09 05:59:01.881573] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
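Before the first sub-test runs, cpu_locks.sh probes the installed lcov through the lt/cmp_versions helpers from scripts/common.sh seen above: both version strings are split on '.', '-' and ':' and compared field by field, with missing fields treated as zero, to decide whether the newer --rc coverage options can be passed. A reduced sketch of that comparison (the traced helper also validates each field with its decimal function, omitted here):

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        ((a > b)) && { [[ $2 == '>' ]]; return; }
        ((a < b)) && { [[ $2 == '<' ]]; return; }
    done
    [[ $2 == *'='* ]]    # equal versions only satisfy <=, >=, ==
}

lt 1.15 2 && echo "installed lcov predates 2.x"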
01:00:07.565 [2024-12-09 05:59:01.881648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58548 ] 01:00:07.565 [2024-12-09 05:59:02.030220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:07.565 [2024-12-09 05:59:02.087719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:07.824 [2024-12-09 05:59:02.179383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:08.391 05:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:08.391 05:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 01:00:08.391 05:59:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58548 01:00:08.391 05:59:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58548 01:00:08.391 05:59:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:00:08.960 05:59:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58548 01:00:08.960 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58548 ']' 01:00:08.960 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58548 01:00:08.960 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 01:00:08.960 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:00:08.960 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58548 01:00:08.960 killing process with pid 58548 01:00:08.960 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:00:08.960 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:00:08.960 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58548' 01:00:08.960 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58548 01:00:08.960 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58548 01:00:09.529 05:59:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58548 01:00:09.529 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 01:00:09.529 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58548 01:00:09.529 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 01:00:09.529 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:09.529 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 01:00:09.529 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:09.529 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58548 01:00:09.529 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58548 ']' 01:00:09.529 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:09.529 
05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:09.529 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:00:09.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:09.529 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:09.529 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 01:00:09.529 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58548) - No such process 01:00:09.529 ERROR: process (pid: 58548) is no longer running 01:00:09.529 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:09.529 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 01:00:09.529 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 01:00:09.529 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:00:09.529 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:00:09.529 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:00:09.529 05:59:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 01:00:09.529 05:59:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 01:00:09.529 05:59:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 01:00:09.529 05:59:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 01:00:09.529 01:00:09.529 real 0m2.150s 01:00:09.529 user 0m2.105s 01:00:09.529 sys 0m0.814s 01:00:09.529 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:09.529 05:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 01:00:09.529 ************************************ 01:00:09.529 END TEST default_locks 01:00:09.529 ************************************ 01:00:09.529 05:59:04 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 01:00:09.529 05:59:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:00:09.529 05:59:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:09.529 05:59:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:00:09.529 ************************************ 01:00:09.529 START TEST default_locks_via_rpc 01:00:09.529 ************************************ 01:00:09.529 05:59:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 01:00:09.529 05:59:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58600 01:00:09.529 05:59:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:00:09.529 05:59:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58600 01:00:09.529 05:59:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58600 ']' 01:00:09.529 05:59:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:09.529 05:59:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local 
max_retries=100 01:00:09.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:09.529 05:59:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:00:09.529 05:59:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:09.529 05:59:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:00:09.529 [2024-12-09 05:59:04.108551] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:00:09.529 [2024-12-09 05:59:04.108627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58600 ] 01:00:09.788 [2024-12-09 05:59:04.261417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:09.788 [2024-12-09 05:59:04.321427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:10.047 [2024-12-09 05:59:04.413525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:10.615 05:59:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:10.615 05:59:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:00:10.615 05:59:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 01:00:10.615 05:59:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:10.615 05:59:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:00:10.615 05:59:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:10.615 05:59:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 01:00:10.615 05:59:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 01:00:10.615 05:59:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 01:00:10.615 05:59:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 01:00:10.615 05:59:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 01:00:10.615 05:59:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:10.615 05:59:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:00:10.615 05:59:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:10.615 05:59:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58600 01:00:10.615 05:59:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58600 01:00:10.615 05:59:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:00:11.182 05:59:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58600 01:00:11.182 05:59:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58600 ']' 01:00:11.182 05:59:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58600 01:00:11.182 05:59:05 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 01:00:11.182 05:59:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:00:11.182 05:59:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58600 01:00:11.182 killing process with pid 58600 01:00:11.182 05:59:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:00:11.182 05:59:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:00:11.182 05:59:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58600' 01:00:11.182 05:59:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58600 01:00:11.182 05:59:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58600 01:00:11.440 01:00:11.440 real 0m1.971s 01:00:11.440 user 0m1.918s 01:00:11.440 sys 0m0.699s 01:00:11.440 05:59:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:11.440 ************************************ 01:00:11.440 END TEST default_locks_via_rpc 01:00:11.440 ************************************ 01:00:11.440 05:59:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:00:11.698 05:59:06 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 01:00:11.698 05:59:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:00:11.698 05:59:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:11.698 05:59:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:00:11.698 ************************************ 01:00:11.698 START TEST non_locking_app_on_locked_coremask 01:00:11.698 ************************************ 01:00:11.698 05:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 01:00:11.698 05:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58651 01:00:11.698 05:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:00:11.698 05:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58651 /var/tmp/spdk.sock 01:00:11.698 05:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58651 ']' 01:00:11.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:11.698 05:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:11.698 05:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:11.698 05:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
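default_locks and default_locks_via_rpc, which just finished above, both reduce to one observable: while a target owns a core, lslocks reports an spdk_cpu_lock file held by its pid, and the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs release and re-take those files at runtime. A sketch of the check and the RPC toggle, with $spdk_tgt_pid as an illustrative variable name:

locks_exist() {
    # true while the target with this pid holds an spdk_cpu_lock file
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

# release the per-core locks at runtime and confirm they are gone
scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
locks_exist "$spdk_tgt_pid" && echo "lock file unexpectedly still present"

# re-take them and confirm the lock is back before shutting the target down
scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
locks_exist "$spdk_tgt_pid" || echo "lock file missing after re-enable"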
01:00:11.698 05:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:11.698 05:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:00:11.698 [2024-12-09 05:59:06.157244] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:00:11.698 [2024-12-09 05:59:06.157456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58651 ] 01:00:11.957 [2024-12-09 05:59:06.296160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:11.957 [2024-12-09 05:59:06.353164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:11.957 [2024-12-09 05:59:06.444899] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:12.525 05:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:12.525 05:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 01:00:12.525 05:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 01:00:12.525 05:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58667 01:00:12.525 05:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58667 /var/tmp/spdk2.sock 01:00:12.525 05:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58667 ']' 01:00:12.525 05:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:00:12.525 05:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:12.525 05:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:00:12.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:00:12.526 05:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:12.526 05:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:00:12.526 [2024-12-09 05:59:07.066039] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:00:12.526 [2024-12-09 05:59:07.066249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58667 ] 01:00:12.785 [2024-12-09 05:59:07.210013] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
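non_locking_app_on_locked_coremask, starting above, verifies that a second target can run on the already-locked core 0 as long as it is launched with --disable-cpumask-locks and its own RPC socket, which is exactly how the second spdk_tgt (pid 58667) was just started. A sketch of the launch pattern; the pid variable names are illustrative and the binary path is repo-relative:

# first target claims core 0 and its lock file
build/bin/spdk_tgt -m 0x1 &
locked_pid=$!
waitforlisten "$locked_pid" /var/tmp/spdk.sock

# second target reuses core 0 but skips the lock, on a separate RPC socket
build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
unlocked_pid=$!
waitforlisten "$unlocked_pid" /var/tmp/spdk2.sock

# only the first instance should show up as the lock owner
lslocks -p "$locked_pid" | grep -q spdk_cpu_lock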
01:00:12.785 [2024-12-09 05:59:07.210050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:12.785 [2024-12-09 05:59:07.329752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:13.045 [2024-12-09 05:59:07.518669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:13.614 05:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:13.614 05:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 01:00:13.614 05:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58651 01:00:13.614 05:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58651 01:00:13.614 05:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:00:14.993 05:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58651 01:00:14.993 05:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58651 ']' 01:00:14.993 05:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58651 01:00:14.993 05:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 01:00:14.993 05:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:00:14.993 05:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58651 01:00:14.993 killing process with pid 58651 01:00:14.993 05:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:00:14.993 05:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:00:14.993 05:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58651' 01:00:14.993 05:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58651 01:00:14.993 05:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58651 01:00:15.932 05:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58667 01:00:15.932 05:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58667 ']' 01:00:15.932 05:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58667 01:00:15.932 05:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 01:00:15.932 05:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:00:15.932 05:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58667 01:00:15.932 killing process with pid 58667 01:00:15.932 05:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:00:15.932 05:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:00:15.933 05:59:10 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58667' 01:00:15.933 05:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58667 01:00:15.933 05:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58667 01:00:16.192 01:00:16.192 real 0m4.476s 01:00:16.192 user 0m4.650s 01:00:16.192 sys 0m1.595s 01:00:16.192 05:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:16.192 05:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:00:16.192 ************************************ 01:00:16.192 END TEST non_locking_app_on_locked_coremask 01:00:16.192 ************************************ 01:00:16.192 05:59:10 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 01:00:16.192 05:59:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:00:16.192 05:59:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:16.192 05:59:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:00:16.192 ************************************ 01:00:16.192 START TEST locking_app_on_unlocked_coremask 01:00:16.192 ************************************ 01:00:16.192 05:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 01:00:16.192 05:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58740 01:00:16.192 05:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 01:00:16.192 05:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58740 /var/tmp/spdk.sock 01:00:16.192 05:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58740 ']' 01:00:16.192 05:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:16.192 05:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:16.192 05:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:00:16.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:16.192 05:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:16.192 05:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 01:00:16.192 [2024-12-09 05:59:10.717210] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:00:16.192 [2024-12-09 05:59:10.717292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58740 ] 01:00:16.452 [2024-12-09 05:59:10.867923] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
01:00:16.452 [2024-12-09 05:59:10.868116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:16.452 [2024-12-09 05:59:10.907180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:16.452 [2024-12-09 05:59:10.961696] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:17.021 05:59:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:17.021 05:59:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 01:00:17.021 05:59:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58756 01:00:17.021 05:59:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 01:00:17.021 05:59:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58756 /var/tmp/spdk2.sock 01:00:17.021 05:59:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58756 ']' 01:00:17.021 05:59:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:00:17.021 05:59:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:17.021 05:59:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:00:17.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:00:17.021 05:59:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:17.021 05:59:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 01:00:17.280 [2024-12-09 05:59:11.618194] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:17.280 [2024-12-09 05:59:11.618424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58756 ] 01:00:17.280 [2024-12-09 05:59:11.762524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:17.280 [2024-12-09 05:59:11.842379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:17.540 [2024-12-09 05:59:11.953197] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:18.110 05:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:18.110 05:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 01:00:18.110 05:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58756 01:00:18.110 05:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58756 01:00:18.110 05:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:00:19.050 05:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58740 01:00:19.050 05:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58740 ']' 01:00:19.050 05:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58740 01:00:19.050 05:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 01:00:19.050 05:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:00:19.050 05:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58740 01:00:19.050 05:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:00:19.050 05:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:00:19.050 killing process with pid 58740 01:00:19.050 05:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58740' 01:00:19.050 05:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58740 01:00:19.050 05:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58740 01:00:19.621 05:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58756 01:00:19.621 05:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58756 ']' 01:00:19.621 05:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58756 01:00:19.621 05:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 01:00:19.621 05:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:00:19.621 05:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58756 01:00:19.621 05:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 01:00:19.621 05:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:00:19.621 killing process with pid 58756 01:00:19.621 05:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58756' 01:00:19.621 05:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58756 01:00:19.621 05:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58756 01:00:19.881 01:00:19.881 real 0m3.635s 01:00:19.881 user 0m3.949s 01:00:19.881 sys 0m1.033s 01:00:19.881 ************************************ 01:00:19.881 END TEST locking_app_on_unlocked_coremask 01:00:19.881 ************************************ 01:00:19.881 05:59:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:19.881 05:59:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 01:00:19.881 05:59:14 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 01:00:19.881 05:59:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:00:19.881 05:59:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:19.881 05:59:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:00:19.881 ************************************ 01:00:19.881 START TEST locking_app_on_locked_coremask 01:00:19.881 ************************************ 01:00:19.881 05:59:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 01:00:19.881 05:59:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58813 01:00:19.881 05:59:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58813 /var/tmp/spdk.sock 01:00:19.881 05:59:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:00:19.881 05:59:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58813 ']' 01:00:19.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:19.881 05:59:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:19.881 05:59:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:19.881 05:59:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:00:19.881 05:59:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:19.881 05:59:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:00:19.881 [2024-12-09 05:59:14.432023] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:19.881 [2024-12-09 05:59:14.432302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58813 ] 01:00:20.141 [2024-12-09 05:59:14.580816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:20.141 [2024-12-09 05:59:14.622315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:20.141 [2024-12-09 05:59:14.677573] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:20.712 05:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:20.712 05:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 01:00:20.712 05:59:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58828 01:00:20.712 05:59:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 01:00:20.712 05:59:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58828 /var/tmp/spdk2.sock 01:00:20.712 05:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 01:00:20.712 05:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58828 /var/tmp/spdk2.sock 01:00:20.712 05:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 01:00:20.712 05:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:20.712 05:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 01:00:20.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:00:20.712 05:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:20.712 05:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58828 /var/tmp/spdk2.sock 01:00:20.712 05:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58828 ']' 01:00:20.712 05:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:00:20.712 05:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:20.712 05:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:00:20.712 05:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:20.712 05:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:00:20.970 [2024-12-09 05:59:15.329555] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
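locking_app_on_locked_coremask expects the launch traced above to fail: pid 58813 already holds the core 0 lock, so the second target (58828) must abort before its RPC socket ever appears. The test wraps the check in the NOT helper, which inverts the exit status so an expected failure keeps the run green. A reduced sketch of that pattern (the traced helper also vets the wrapped command through valid_exec_arg, omitted here):

NOT() {
    # succeed only when the wrapped command fails
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}

# second target on an already-locked core: startup must abort
build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
second_pid=$!
NOT waitforlisten "$second_pid" /var/tmp/spdk2.sock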
01:00:20.970 [2024-12-09 05:59:15.329812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58828 ] 01:00:20.971 [2024-12-09 05:59:15.480329] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58813 has claimed it. 01:00:20.971 [2024-12-09 05:59:15.480378] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 01:00:21.542 ERROR: process (pid: 58828) is no longer running 01:00:21.542 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58828) - No such process 01:00:21.542 05:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:21.542 05:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 01:00:21.542 05:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 01:00:21.542 05:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:00:21.542 05:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:00:21.542 05:59:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:00:21.542 05:59:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58813 01:00:21.542 05:59:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58813 01:00:21.542 05:59:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:00:22.110 05:59:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58813 01:00:22.110 05:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58813 ']' 01:00:22.110 05:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58813 01:00:22.110 05:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 01:00:22.110 05:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:00:22.110 05:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58813 01:00:22.368 killing process with pid 58813 01:00:22.368 05:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:00:22.368 05:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:00:22.368 05:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58813' 01:00:22.368 05:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58813 01:00:22.368 05:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58813 01:00:22.627 ************************************ 01:00:22.627 END TEST locking_app_on_locked_coremask 01:00:22.627 ************************************ 01:00:22.627 01:00:22.627 real 0m2.658s 01:00:22.627 user 0m2.959s 01:00:22.627 sys 0m0.782s 01:00:22.627 05:59:17 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:22.627 05:59:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:00:22.627 05:59:17 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 01:00:22.627 05:59:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:00:22.627 05:59:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:22.627 05:59:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:00:22.627 ************************************ 01:00:22.627 START TEST locking_overlapped_coremask 01:00:22.627 ************************************ 01:00:22.627 05:59:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 01:00:22.627 05:59:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58879 01:00:22.627 05:59:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58879 /var/tmp/spdk.sock 01:00:22.627 05:59:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 01:00:22.627 05:59:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58879 ']' 01:00:22.627 05:59:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:22.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:22.627 05:59:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:22.627 05:59:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:00:22.627 05:59:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:22.627 05:59:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 01:00:22.627 [2024-12-09 05:59:17.163532] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:22.627 [2024-12-09 05:59:17.163766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58879 ] 01:00:22.886 [2024-12-09 05:59:17.312351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:00:22.886 [2024-12-09 05:59:17.353788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:00:22.886 [2024-12-09 05:59:17.353971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:22.886 [2024-12-09 05:59:17.353972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:00:22.886 [2024-12-09 05:59:17.409244] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:23.455 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:23.455 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 01:00:23.455 05:59:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58897 01:00:23.455 05:59:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 01:00:23.455 05:59:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58897 /var/tmp/spdk2.sock 01:00:23.455 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 01:00:23.455 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58897 /var/tmp/spdk2.sock 01:00:23.455 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 01:00:23.455 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:23.455 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 01:00:23.455 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:23.455 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58897 /var/tmp/spdk2.sock 01:00:23.455 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58897 ']' 01:00:23.455 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:00:23.455 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:23.455 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:00:23.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:00:23.455 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:23.455 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 01:00:23.714 [2024-12-09 05:59:18.078786] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
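locking_overlapped_coremask runs its first target on -m 0x7 (cores 0-2) and then, in the launch above, tries a second one on -m 0x1c (cores 2-4): the masks overlap on core 2, whose lock the first target already holds, so this start is expected to fail under the same NOT wrapper. Sketch with illustrative pid variables:

build/bin/spdk_tgt -m 0x7 &                          # cores 0, 1, 2 locked
first_pid=$!
waitforlisten "$first_pid" /var/tmp/spdk.sock

# 0x1c = cores 2, 3, 4 -- core 2 collides with the running target
build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock &
second_pid=$!
NOT waitforlisten "$second_pid" /var/tmp/spdk2.sock  # must fail on the core 2 lock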
01:00:23.714 [2024-12-09 05:59:18.078857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58897 ] 01:00:23.714 [2024-12-09 05:59:18.228531] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58879 has claimed it. 01:00:23.714 [2024-12-09 05:59:18.228579] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 01:00:24.280 ERROR: process (pid: 58897) is no longer running 01:00:24.280 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58897) - No such process 01:00:24.280 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:24.281 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 01:00:24.281 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 01:00:24.281 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:00:24.281 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:00:24.281 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:00:24.281 05:59:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 01:00:24.281 05:59:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 01:00:24.281 05:59:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 01:00:24.281 05:59:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 01:00:24.281 05:59:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58879 01:00:24.281 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 58879 ']' 01:00:24.281 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 58879 01:00:24.281 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 01:00:24.281 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:00:24.281 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58879 01:00:24.281 killing process with pid 58879 01:00:24.281 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:00:24.281 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:00:24.281 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58879' 01:00:24.281 05:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 58879 01:00:24.281 05:59:18 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 58879 01:00:24.539 01:00:24.539 real 0m1.989s 01:00:24.539 user 0m5.508s 01:00:24.539 sys 0m0.435s 01:00:24.539 05:59:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:24.539 ************************************ 01:00:24.539 END TEST locking_overlapped_coremask 01:00:24.539 ************************************ 01:00:24.539 05:59:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 01:00:24.798 05:59:19 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 01:00:24.798 05:59:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:00:24.798 05:59:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:24.798 05:59:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:00:24.798 ************************************ 01:00:24.798 START TEST locking_overlapped_coremask_via_rpc 01:00:24.798 ************************************ 01:00:24.798 05:59:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 01:00:24.798 05:59:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58937 01:00:24.798 05:59:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58937 /var/tmp/spdk.sock 01:00:24.798 05:59:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 01:00:24.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:24.798 05:59:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58937 ']' 01:00:24.798 05:59:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:24.798 05:59:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:24.798 05:59:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:00:24.798 05:59:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:24.798 05:59:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:00:24.798 [2024-12-09 05:59:19.236121] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:00:24.798 [2024-12-09 05:59:19.236192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58937 ] 01:00:25.057 [2024-12-09 05:59:19.385603] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
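When core locks are active, each claimed core shows up as a file named /var/tmp/spdk_cpu_lock_NNN; the check_remaining_locks step earlier compares the files present against the brace expansion for cores 000-002, while the via_rpc variant starting here launches with --disable-cpumask-locks and only claims its cores later over RPC. A hedged standalone form of that comparison:

    # hedged sketch of the check_remaining_locks comparison used by these tests
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "lock files match cores 0-2" \
        || echo "unexpected lock files: ${locks[*]}"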
01:00:25.057 [2024-12-09 05:59:19.385762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:00:25.057 [2024-12-09 05:59:19.426958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:00:25.057 [2024-12-09 05:59:19.427154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:25.057 [2024-12-09 05:59:19.427156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:00:25.057 [2024-12-09 05:59:19.482431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:25.625 05:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:25.625 05:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:00:25.625 05:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 01:00:25.625 05:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58955 01:00:25.625 05:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 58955 /var/tmp/spdk2.sock 01:00:25.625 05:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58955 ']' 01:00:25.625 05:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:00:25.625 05:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:25.625 05:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:00:25.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:00:25.625 05:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:25.625 05:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:00:25.625 [2024-12-09 05:59:20.126442] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:00:25.625 [2024-12-09 05:59:20.126683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58955 ] 01:00:25.883 [2024-12-09 05:59:20.275392] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
01:00:25.883 [2024-12-09 05:59:20.275426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:00:25.883 [2024-12-09 05:59:20.362240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:00:25.883 [2024-12-09 05:59:20.362409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:00:25.883 [2024-12-09 05:59:20.362413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:00:25.883 [2024-12-09 05:59:20.467641] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:26.451 05:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:26.451 05:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:00:26.451 05:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 01:00:26.451 05:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:26.451 05:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:00:26.451 05:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:26.451 05:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 01:00:26.451 05:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 01:00:26.451 05:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 01:00:26.451 05:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:00:26.451 05:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:26.451 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:00:26.451 05:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:26.451 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 01:00:26.451 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:26.451 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:00:26.451 [2024-12-09 05:59:21.012189] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58937 has claimed it. 
01:00:26.451 request: 01:00:26.451 { 01:00:26.451 "method": "framework_enable_cpumask_locks", 01:00:26.451 "req_id": 1 01:00:26.451 } 01:00:26.451 Got JSON-RPC error response 01:00:26.451 response: 01:00:26.451 { 01:00:26.451 "code": -32603, 01:00:26.451 "message": "Failed to claim CPU core: 2" 01:00:26.451 } 01:00:26.451 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:00:26.451 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 01:00:26.451 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:00:26.451 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:00:26.451 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:00:26.451 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58937 /var/tmp/spdk.sock 01:00:26.451 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58937 ']' 01:00:26.451 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:26.451 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:26.451 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:00:26.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:26.451 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:26.451 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:00:26.710 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:26.710 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:00:26.710 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 58955 /var/tmp/spdk2.sock 01:00:26.710 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58955 ']' 01:00:26.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:00:26.711 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:00:26.711 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:26.711 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
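Both targets were started with --disable-cpumask-locks, so cores are only claimed once framework_enable_cpumask_locks is issued: the request to the first target (pid 58937, cores 0-2) succeeds, and the same request against the second target (pid 58955, cores 2-4) fails with the internal error shown above because core 2 is already locked. A hedged manual reproduction with scripts/rpc.py:

    # hedged sketch: the second enable call is expected to fail with JSON-RPC code -32603
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        || echo "expected failure: core 2 already claimed"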
01:00:26.711 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:26.711 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:00:26.970 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:26.970 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:00:26.970 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 01:00:26.970 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 01:00:26.970 ************************************ 01:00:26.970 END TEST locking_overlapped_coremask_via_rpc 01:00:26.970 ************************************ 01:00:26.970 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 01:00:26.970 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 01:00:26.970 01:00:26.970 real 0m2.277s 01:00:26.970 user 0m0.982s 01:00:26.970 sys 0m0.227s 01:00:26.970 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:26.970 05:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:00:26.970 05:59:21 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 01:00:26.970 05:59:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58937 ]] 01:00:26.970 05:59:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58937 01:00:26.970 05:59:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58937 ']' 01:00:26.970 05:59:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58937 01:00:26.970 05:59:21 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 01:00:26.970 05:59:21 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:00:26.970 05:59:21 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58937 01:00:26.970 killing process with pid 58937 01:00:26.970 05:59:21 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:00:26.970 05:59:21 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:00:26.970 05:59:21 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58937' 01:00:26.970 05:59:21 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58937 01:00:26.970 05:59:21 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58937 01:00:27.538 05:59:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58955 ]] 01:00:27.538 05:59:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58955 01:00:27.538 05:59:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58955 ']' 01:00:27.538 05:59:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58955 01:00:27.538 05:59:21 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 01:00:27.538 05:59:21 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:00:27.538 
05:59:21 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58955 01:00:27.538 killing process with pid 58955 01:00:27.538 05:59:21 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:00:27.538 05:59:21 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:00:27.538 05:59:21 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58955' 01:00:27.538 05:59:21 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58955 01:00:27.538 05:59:21 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58955 01:00:27.798 05:59:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 01:00:27.799 Process with pid 58937 is not found 01:00:27.799 05:59:22 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 01:00:27.799 05:59:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58937 ]] 01:00:27.799 05:59:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58937 01:00:27.799 05:59:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58937 ']' 01:00:27.799 05:59:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58937 01:00:27.799 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58937) - No such process 01:00:27.799 05:59:22 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58937 is not found' 01:00:27.799 05:59:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58955 ]] 01:00:27.799 05:59:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58955 01:00:27.799 05:59:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58955 ']' 01:00:27.799 05:59:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58955 01:00:27.799 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58955) - No such process 01:00:27.799 Process with pid 58955 is not found 01:00:27.799 05:59:22 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58955 is not found' 01:00:27.799 05:59:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 01:00:27.799 01:00:27.799 real 0m20.691s 01:00:27.799 user 0m33.331s 01:00:27.799 sys 0m6.599s 01:00:27.799 ************************************ 01:00:27.799 END TEST cpu_locks 01:00:27.799 ************************************ 01:00:27.799 05:59:22 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:27.799 05:59:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:00:27.799 ************************************ 01:00:27.799 END TEST event 01:00:27.799 ************************************ 01:00:27.799 01:00:27.799 real 0m48.971s 01:00:27.799 user 1m32.075s 01:00:27.799 sys 0m10.853s 01:00:27.799 05:59:22 event -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:27.799 05:59:22 event -- common/autotest_common.sh@10 -- # set +x 01:00:27.799 05:59:22 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 01:00:27.799 05:59:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:00:27.799 05:59:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:27.799 05:59:22 -- common/autotest_common.sh@10 -- # set +x 01:00:28.059 ************************************ 01:00:28.059 START TEST thread 01:00:28.059 ************************************ 01:00:28.059 05:59:22 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 01:00:28.059 * Looking for test storage... 
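The lcov/LCOV_OPTS setup traced next decides whether the installed lcov predates 2.x by splitting both version strings on '.', '-' and ':' and comparing the fields numerically (the lt / cmp_versions helpers in scripts/common.sh). A hedged, condensed form of that comparison:

    # hedged sketch of the version comparison traced below (succeeds when $1 < $2)
    version_lt() {
        local IFS=.-: i a b
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1
    }
    version_lt 1.15 2 && echo "lcov is older than 2.x"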
01:00:28.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 01:00:28.059 05:59:22 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:00:28.059 05:59:22 thread -- common/autotest_common.sh@1711 -- # lcov --version 01:00:28.059 05:59:22 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:00:28.059 05:59:22 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:00:28.059 05:59:22 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:00:28.059 05:59:22 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 01:00:28.059 05:59:22 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 01:00:28.059 05:59:22 thread -- scripts/common.sh@336 -- # IFS=.-: 01:00:28.059 05:59:22 thread -- scripts/common.sh@336 -- # read -ra ver1 01:00:28.059 05:59:22 thread -- scripts/common.sh@337 -- # IFS=.-: 01:00:28.059 05:59:22 thread -- scripts/common.sh@337 -- # read -ra ver2 01:00:28.059 05:59:22 thread -- scripts/common.sh@338 -- # local 'op=<' 01:00:28.059 05:59:22 thread -- scripts/common.sh@340 -- # ver1_l=2 01:00:28.059 05:59:22 thread -- scripts/common.sh@341 -- # ver2_l=1 01:00:28.059 05:59:22 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:00:28.059 05:59:22 thread -- scripts/common.sh@344 -- # case "$op" in 01:00:28.059 05:59:22 thread -- scripts/common.sh@345 -- # : 1 01:00:28.059 05:59:22 thread -- scripts/common.sh@364 -- # (( v = 0 )) 01:00:28.059 05:59:22 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:00:28.059 05:59:22 thread -- scripts/common.sh@365 -- # decimal 1 01:00:28.059 05:59:22 thread -- scripts/common.sh@353 -- # local d=1 01:00:28.059 05:59:22 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:00:28.059 05:59:22 thread -- scripts/common.sh@355 -- # echo 1 01:00:28.059 05:59:22 thread -- scripts/common.sh@365 -- # ver1[v]=1 01:00:28.059 05:59:22 thread -- scripts/common.sh@366 -- # decimal 2 01:00:28.059 05:59:22 thread -- scripts/common.sh@353 -- # local d=2 01:00:28.059 05:59:22 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:00:28.059 05:59:22 thread -- scripts/common.sh@355 -- # echo 2 01:00:28.059 05:59:22 thread -- scripts/common.sh@366 -- # ver2[v]=2 01:00:28.059 05:59:22 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:00:28.059 05:59:22 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:00:28.059 05:59:22 thread -- scripts/common.sh@368 -- # return 0 01:00:28.059 05:59:22 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:00:28.059 05:59:22 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:00:28.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:28.059 --rc genhtml_branch_coverage=1 01:00:28.059 --rc genhtml_function_coverage=1 01:00:28.059 --rc genhtml_legend=1 01:00:28.059 --rc geninfo_all_blocks=1 01:00:28.059 --rc geninfo_unexecuted_blocks=1 01:00:28.059 01:00:28.059 ' 01:00:28.059 05:59:22 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:00:28.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:28.059 --rc genhtml_branch_coverage=1 01:00:28.059 --rc genhtml_function_coverage=1 01:00:28.059 --rc genhtml_legend=1 01:00:28.059 --rc geninfo_all_blocks=1 01:00:28.059 --rc geninfo_unexecuted_blocks=1 01:00:28.059 01:00:28.059 ' 01:00:28.059 05:59:22 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:00:28.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
01:00:28.059 --rc genhtml_branch_coverage=1 01:00:28.059 --rc genhtml_function_coverage=1 01:00:28.059 --rc genhtml_legend=1 01:00:28.059 --rc geninfo_all_blocks=1 01:00:28.059 --rc geninfo_unexecuted_blocks=1 01:00:28.059 01:00:28.059 ' 01:00:28.059 05:59:22 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:00:28.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:28.059 --rc genhtml_branch_coverage=1 01:00:28.059 --rc genhtml_function_coverage=1 01:00:28.059 --rc genhtml_legend=1 01:00:28.059 --rc geninfo_all_blocks=1 01:00:28.059 --rc geninfo_unexecuted_blocks=1 01:00:28.059 01:00:28.059 ' 01:00:28.059 05:59:22 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 01:00:28.059 05:59:22 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 01:00:28.059 05:59:22 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:28.059 05:59:22 thread -- common/autotest_common.sh@10 -- # set +x 01:00:28.318 ************************************ 01:00:28.318 START TEST thread_poller_perf 01:00:28.318 ************************************ 01:00:28.318 05:59:22 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 01:00:28.318 [2024-12-09 05:59:22.673261] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:00:28.318 [2024-12-09 05:59:22.673593] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59080 ] 01:00:28.318 [2024-12-09 05:59:22.826421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:28.318 [2024-12-09 05:59:22.873372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:28.318 Running 1000 pollers for 1 seconds with 1 microseconds period. 
01:00:29.698 [2024-12-09T05:59:24.285Z] ====================================== 01:00:29.698 [2024-12-09T05:59:24.285Z] busy:2498646924 (cyc) 01:00:29.698 [2024-12-09T05:59:24.285Z] total_run_count: 437000 01:00:29.698 [2024-12-09T05:59:24.285Z] tsc_hz: 2490000000 (cyc) 01:00:29.698 [2024-12-09T05:59:24.285Z] ====================================== 01:00:29.698 [2024-12-09T05:59:24.285Z] poller_cost: 5717 (cyc), 2295 (nsec) 01:00:29.698 01:00:29.698 real 0m1.269s 01:00:29.698 user 0m1.106s 01:00:29.698 sys 0m0.056s 01:00:29.698 05:59:23 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:29.698 05:59:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 01:00:29.698 ************************************ 01:00:29.698 END TEST thread_poller_perf 01:00:29.698 ************************************ 01:00:29.698 05:59:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 01:00:29.698 05:59:23 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 01:00:29.698 05:59:23 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:29.698 05:59:23 thread -- common/autotest_common.sh@10 -- # set +x 01:00:29.698 ************************************ 01:00:29.698 START TEST thread_poller_perf 01:00:29.698 ************************************ 01:00:29.698 05:59:23 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 01:00:29.698 [2024-12-09 05:59:24.019381] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:00:29.698 [2024-12-09 05:59:24.019467] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59121 ] 01:00:29.698 [2024-12-09 05:59:24.174079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:29.698 Running 1000 pollers for 1 seconds with 0 microseconds period. 
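In the summary above, poller_cost is simply the measured busy TSC cycles divided by the number of poller invocations, converted to nanoseconds with the reported tsc_hz; per the banner, this run used -b 1000 pollers for -t 1 second with a -l 1 microsecond period. A hedged check of the reported figures:

    # hedged sketch: reproduce the poller_cost numbers from the summary above
    awk 'BEGIN {
        busy = 2498646924; runs = 437000; tsc_hz = 2490000000
        cyc  = int(busy / runs)               # ~5717 cycles per poller invocation
        nsec = int(cyc / (tsc_hz / 1e9))      # ~2295 ns at 2.49 GHz
        printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, nsec
    }'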
01:00:29.698 [2024-12-09 05:59:24.219241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:31.083 [2024-12-09T05:59:25.670Z] ====================================== 01:00:31.083 [2024-12-09T05:59:25.670Z] busy:2491919326 (cyc) 01:00:31.083 [2024-12-09T05:59:25.670Z] total_run_count: 5341000 01:00:31.083 [2024-12-09T05:59:25.670Z] tsc_hz: 2490000000 (cyc) 01:00:31.084 [2024-12-09T05:59:25.671Z] ====================================== 01:00:31.084 [2024-12-09T05:59:25.671Z] poller_cost: 466 (cyc), 187 (nsec) 01:00:31.084 ************************************ 01:00:31.084 END TEST thread_poller_perf 01:00:31.084 ************************************ 01:00:31.084 01:00:31.084 real 0m1.262s 01:00:31.084 user 0m1.103s 01:00:31.084 sys 0m0.052s 01:00:31.084 05:59:25 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:31.084 05:59:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 01:00:31.084 05:59:25 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 01:00:31.084 ************************************ 01:00:31.084 END TEST thread 01:00:31.084 ************************************ 01:00:31.084 01:00:31.084 real 0m2.926s 01:00:31.084 user 0m2.382s 01:00:31.084 sys 0m0.335s 01:00:31.084 05:59:25 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:31.084 05:59:25 thread -- common/autotest_common.sh@10 -- # set +x 01:00:31.084 05:59:25 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 01:00:31.084 05:59:25 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 01:00:31.084 05:59:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:00:31.084 05:59:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:31.084 05:59:25 -- common/autotest_common.sh@10 -- # set +x 01:00:31.084 ************************************ 01:00:31.084 START TEST app_cmdline 01:00:31.084 ************************************ 01:00:31.084 05:59:25 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 01:00:31.084 * Looking for test storage... 
01:00:31.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 01:00:31.084 05:59:25 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:00:31.084 05:59:25 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 01:00:31.084 05:59:25 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:00:31.084 05:59:25 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:00:31.084 05:59:25 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:00:31.084 05:59:25 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 01:00:31.084 05:59:25 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 01:00:31.084 05:59:25 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 01:00:31.084 05:59:25 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 01:00:31.084 05:59:25 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 01:00:31.084 05:59:25 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 01:00:31.084 05:59:25 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 01:00:31.084 05:59:25 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 01:00:31.084 05:59:25 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 01:00:31.084 05:59:25 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:00:31.084 05:59:25 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 01:00:31.084 05:59:25 app_cmdline -- scripts/common.sh@345 -- # : 1 01:00:31.084 05:59:25 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 01:00:31.084 05:59:25 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:00:31.084 05:59:25 app_cmdline -- scripts/common.sh@365 -- # decimal 1 01:00:31.084 05:59:25 app_cmdline -- scripts/common.sh@353 -- # local d=1 01:00:31.084 05:59:25 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:00:31.084 05:59:25 app_cmdline -- scripts/common.sh@355 -- # echo 1 01:00:31.084 05:59:25 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 01:00:31.084 05:59:25 app_cmdline -- scripts/common.sh@366 -- # decimal 2 01:00:31.084 05:59:25 app_cmdline -- scripts/common.sh@353 -- # local d=2 01:00:31.084 05:59:25 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:00:31.084 05:59:25 app_cmdline -- scripts/common.sh@355 -- # echo 2 01:00:31.084 05:59:25 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 01:00:31.084 05:59:25 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:00:31.084 05:59:25 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:00:31.084 05:59:25 app_cmdline -- scripts/common.sh@368 -- # return 0 01:00:31.084 05:59:25 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:00:31.084 05:59:25 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:00:31.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:31.084 --rc genhtml_branch_coverage=1 01:00:31.084 --rc genhtml_function_coverage=1 01:00:31.084 --rc genhtml_legend=1 01:00:31.084 --rc geninfo_all_blocks=1 01:00:31.084 --rc geninfo_unexecuted_blocks=1 01:00:31.084 01:00:31.084 ' 01:00:31.084 05:59:25 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:00:31.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:31.084 --rc genhtml_branch_coverage=1 01:00:31.084 --rc genhtml_function_coverage=1 01:00:31.084 --rc genhtml_legend=1 01:00:31.084 --rc geninfo_all_blocks=1 01:00:31.084 --rc geninfo_unexecuted_blocks=1 01:00:31.084 
01:00:31.084 ' 01:00:31.084 05:59:25 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:00:31.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:31.084 --rc genhtml_branch_coverage=1 01:00:31.084 --rc genhtml_function_coverage=1 01:00:31.084 --rc genhtml_legend=1 01:00:31.084 --rc geninfo_all_blocks=1 01:00:31.084 --rc geninfo_unexecuted_blocks=1 01:00:31.084 01:00:31.084 ' 01:00:31.084 05:59:25 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:00:31.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:31.084 --rc genhtml_branch_coverage=1 01:00:31.084 --rc genhtml_function_coverage=1 01:00:31.084 --rc genhtml_legend=1 01:00:31.084 --rc geninfo_all_blocks=1 01:00:31.084 --rc geninfo_unexecuted_blocks=1 01:00:31.084 01:00:31.084 ' 01:00:31.084 05:59:25 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 01:00:31.084 05:59:25 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59198 01:00:31.084 05:59:25 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 01:00:31.084 05:59:25 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59198 01:00:31.084 05:59:25 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59198 ']' 01:00:31.084 05:59:25 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:31.084 05:59:25 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:31.084 05:59:25 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:00:31.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:31.084 05:59:25 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:31.084 05:59:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 01:00:31.343 [2024-12-09 05:59:25.688493] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
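The cmdline test runs spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served; the env_dpdk_get_mem_stats probe later in the trace is rejected with JSON-RPC code -32601 ("Method not found"). A hedged manual equivalent using scripts/rpc.py against this target:

    # hedged sketch: only the allow-listed RPCs should answer on /var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version       # allowed, returns the version object
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods        # allowed, lists the two permitted methods
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats \
        || echo "expected: -32601 Method not found"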
01:00:31.343 [2024-12-09 05:59:25.688684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59198 ] 01:00:31.343 [2024-12-09 05:59:25.839113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:31.343 [2024-12-09 05:59:25.878234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:31.602 [2024-12-09 05:59:25.933200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:32.183 05:59:26 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:32.183 05:59:26 app_cmdline -- common/autotest_common.sh@868 -- # return 0 01:00:32.183 05:59:26 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 01:00:32.183 { 01:00:32.183 "version": "SPDK v25.01-pre git sha1 15ce1ba92", 01:00:32.183 "fields": { 01:00:32.183 "major": 25, 01:00:32.183 "minor": 1, 01:00:32.183 "patch": 0, 01:00:32.183 "suffix": "-pre", 01:00:32.183 "commit": "15ce1ba92" 01:00:32.183 } 01:00:32.183 } 01:00:32.183 05:59:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 01:00:32.183 05:59:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 01:00:32.183 05:59:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 01:00:32.183 05:59:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 01:00:32.183 05:59:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 01:00:32.183 05:59:26 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:32.183 05:59:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 01:00:32.183 05:59:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 01:00:32.183 05:59:26 app_cmdline -- app/cmdline.sh@26 -- # sort 01:00:32.183 05:59:26 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:32.472 05:59:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 01:00:32.472 05:59:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 01:00:32.472 05:59:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 01:00:32.472 05:59:26 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 01:00:32.472 05:59:26 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 01:00:32.472 05:59:26 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:00:32.472 05:59:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:32.472 05:59:26 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:00:32.472 05:59:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:32.472 05:59:26 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:00:32.472 05:59:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:32.472 05:59:26 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:00:32.472 05:59:26 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 01:00:32.472 05:59:26 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 01:00:32.472 request: 01:00:32.472 { 01:00:32.472 "method": "env_dpdk_get_mem_stats", 01:00:32.472 "req_id": 1 01:00:32.472 } 01:00:32.472 Got JSON-RPC error response 01:00:32.472 response: 01:00:32.472 { 01:00:32.472 "code": -32601, 01:00:32.472 "message": "Method not found" 01:00:32.472 } 01:00:32.472 05:59:26 app_cmdline -- common/autotest_common.sh@655 -- # es=1 01:00:32.472 05:59:26 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:00:32.472 05:59:26 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:00:32.472 05:59:26 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:00:32.472 05:59:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59198 01:00:32.472 05:59:26 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59198 ']' 01:00:32.472 05:59:26 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59198 01:00:32.472 05:59:26 app_cmdline -- common/autotest_common.sh@959 -- # uname 01:00:32.472 05:59:26 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:00:32.472 05:59:26 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59198 01:00:32.472 killing process with pid 59198 01:00:32.472 05:59:27 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:00:32.472 05:59:27 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:00:32.472 05:59:27 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59198' 01:00:32.472 05:59:27 app_cmdline -- common/autotest_common.sh@973 -- # kill 59198 01:00:32.472 05:59:27 app_cmdline -- common/autotest_common.sh@978 -- # wait 59198 01:00:33.054 01:00:33.054 real 0m1.943s 01:00:33.054 user 0m2.195s 01:00:33.054 sys 0m0.533s 01:00:33.054 05:59:27 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:33.054 ************************************ 01:00:33.055 END TEST app_cmdline 01:00:33.055 ************************************ 01:00:33.055 05:59:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 01:00:33.055 05:59:27 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 01:00:33.055 05:59:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:00:33.055 05:59:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:33.055 05:59:27 -- common/autotest_common.sh@10 -- # set +x 01:00:33.055 ************************************ 01:00:33.055 START TEST version 01:00:33.055 ************************************ 01:00:33.055 05:59:27 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 01:00:33.055 * Looking for test storage... 
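The version test below rebuilds the SPDK version string from include/spdk/version.h with grep/cut/tr (major 25, minor 1, patch 0, suffix -pre, giving 25.1rc0) and checks it against Python's spdk.__version__. A hedged, condensed form of that parsing, assuming the defines are tab-separated as the cut -f2 in the trace implies:

    # hedged sketch of the get_header_version parsing traced below
    hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    version="$major.$minor"; (( patch != 0 )) && version+=".$patch"
    [[ $suffix == -pre ]] && version+="rc0"
    echo "$version"   # 25.1rc0 in this tree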
01:00:33.055 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 01:00:33.055 05:59:27 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:00:33.055 05:59:27 version -- common/autotest_common.sh@1711 -- # lcov --version 01:00:33.055 05:59:27 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:00:33.055 05:59:27 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:00:33.055 05:59:27 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:00:33.055 05:59:27 version -- scripts/common.sh@333 -- # local ver1 ver1_l 01:00:33.055 05:59:27 version -- scripts/common.sh@334 -- # local ver2 ver2_l 01:00:33.055 05:59:27 version -- scripts/common.sh@336 -- # IFS=.-: 01:00:33.055 05:59:27 version -- scripts/common.sh@336 -- # read -ra ver1 01:00:33.055 05:59:27 version -- scripts/common.sh@337 -- # IFS=.-: 01:00:33.055 05:59:27 version -- scripts/common.sh@337 -- # read -ra ver2 01:00:33.055 05:59:27 version -- scripts/common.sh@338 -- # local 'op=<' 01:00:33.055 05:59:27 version -- scripts/common.sh@340 -- # ver1_l=2 01:00:33.055 05:59:27 version -- scripts/common.sh@341 -- # ver2_l=1 01:00:33.055 05:59:27 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:00:33.055 05:59:27 version -- scripts/common.sh@344 -- # case "$op" in 01:00:33.055 05:59:27 version -- scripts/common.sh@345 -- # : 1 01:00:33.055 05:59:27 version -- scripts/common.sh@364 -- # (( v = 0 )) 01:00:33.055 05:59:27 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:00:33.055 05:59:27 version -- scripts/common.sh@365 -- # decimal 1 01:00:33.055 05:59:27 version -- scripts/common.sh@353 -- # local d=1 01:00:33.055 05:59:27 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:00:33.055 05:59:27 version -- scripts/common.sh@355 -- # echo 1 01:00:33.055 05:59:27 version -- scripts/common.sh@365 -- # ver1[v]=1 01:00:33.315 05:59:27 version -- scripts/common.sh@366 -- # decimal 2 01:00:33.315 05:59:27 version -- scripts/common.sh@353 -- # local d=2 01:00:33.315 05:59:27 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:00:33.315 05:59:27 version -- scripts/common.sh@355 -- # echo 2 01:00:33.315 05:59:27 version -- scripts/common.sh@366 -- # ver2[v]=2 01:00:33.315 05:59:27 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:00:33.315 05:59:27 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:00:33.315 05:59:27 version -- scripts/common.sh@368 -- # return 0 01:00:33.315 05:59:27 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:00:33.315 05:59:27 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:00:33.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:33.315 --rc genhtml_branch_coverage=1 01:00:33.315 --rc genhtml_function_coverage=1 01:00:33.315 --rc genhtml_legend=1 01:00:33.315 --rc geninfo_all_blocks=1 01:00:33.315 --rc geninfo_unexecuted_blocks=1 01:00:33.315 01:00:33.315 ' 01:00:33.315 05:59:27 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:00:33.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:33.315 --rc genhtml_branch_coverage=1 01:00:33.315 --rc genhtml_function_coverage=1 01:00:33.315 --rc genhtml_legend=1 01:00:33.315 --rc geninfo_all_blocks=1 01:00:33.315 --rc geninfo_unexecuted_blocks=1 01:00:33.315 01:00:33.315 ' 01:00:33.315 05:59:27 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:00:33.315 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 01:00:33.315 --rc genhtml_branch_coverage=1 01:00:33.315 --rc genhtml_function_coverage=1 01:00:33.315 --rc genhtml_legend=1 01:00:33.315 --rc geninfo_all_blocks=1 01:00:33.315 --rc geninfo_unexecuted_blocks=1 01:00:33.315 01:00:33.315 ' 01:00:33.315 05:59:27 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:00:33.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:33.315 --rc genhtml_branch_coverage=1 01:00:33.315 --rc genhtml_function_coverage=1 01:00:33.315 --rc genhtml_legend=1 01:00:33.315 --rc geninfo_all_blocks=1 01:00:33.315 --rc geninfo_unexecuted_blocks=1 01:00:33.315 01:00:33.315 ' 01:00:33.315 05:59:27 version -- app/version.sh@17 -- # get_header_version major 01:00:33.315 05:59:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:00:33.315 05:59:27 version -- app/version.sh@14 -- # cut -f2 01:00:33.315 05:59:27 version -- app/version.sh@14 -- # tr -d '"' 01:00:33.315 05:59:27 version -- app/version.sh@17 -- # major=25 01:00:33.315 05:59:27 version -- app/version.sh@18 -- # get_header_version minor 01:00:33.315 05:59:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:00:33.315 05:59:27 version -- app/version.sh@14 -- # cut -f2 01:00:33.315 05:59:27 version -- app/version.sh@14 -- # tr -d '"' 01:00:33.315 05:59:27 version -- app/version.sh@18 -- # minor=1 01:00:33.315 05:59:27 version -- app/version.sh@19 -- # get_header_version patch 01:00:33.315 05:59:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:00:33.315 05:59:27 version -- app/version.sh@14 -- # cut -f2 01:00:33.315 05:59:27 version -- app/version.sh@14 -- # tr -d '"' 01:00:33.316 05:59:27 version -- app/version.sh@19 -- # patch=0 01:00:33.316 05:59:27 version -- app/version.sh@20 -- # get_header_version suffix 01:00:33.316 05:59:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:00:33.316 05:59:27 version -- app/version.sh@14 -- # cut -f2 01:00:33.316 05:59:27 version -- app/version.sh@14 -- # tr -d '"' 01:00:33.316 05:59:27 version -- app/version.sh@20 -- # suffix=-pre 01:00:33.316 05:59:27 version -- app/version.sh@22 -- # version=25.1 01:00:33.316 05:59:27 version -- app/version.sh@25 -- # (( patch != 0 )) 01:00:33.316 05:59:27 version -- app/version.sh@28 -- # version=25.1rc0 01:00:33.316 05:59:27 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 01:00:33.316 05:59:27 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 01:00:33.316 05:59:27 version -- app/version.sh@30 -- # py_version=25.1rc0 01:00:33.316 05:59:27 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 01:00:33.316 01:00:33.316 real 0m0.330s 01:00:33.316 user 0m0.197s 01:00:33.316 sys 0m0.188s 01:00:33.316 05:59:27 version -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:33.316 ************************************ 01:00:33.316 END TEST version 01:00:33.316 ************************************ 01:00:33.316 05:59:27 version -- common/autotest_common.sh@10 -- # set +x 01:00:33.316 05:59:27 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 01:00:33.316 05:59:27 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 01:00:33.316 05:59:27 -- spdk/autotest.sh@194 -- # uname -s 01:00:33.316 05:59:27 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 01:00:33.316 05:59:27 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 01:00:33.316 05:59:27 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 01:00:33.316 05:59:27 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 01:00:33.316 05:59:27 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 01:00:33.316 05:59:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:00:33.316 05:59:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:33.316 05:59:27 -- common/autotest_common.sh@10 -- # set +x 01:00:33.316 ************************************ 01:00:33.316 START TEST spdk_dd 01:00:33.316 ************************************ 01:00:33.316 05:59:27 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 01:00:33.576 * Looking for test storage... 01:00:33.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 01:00:33.576 05:59:27 spdk_dd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:00:33.576 05:59:27 spdk_dd -- common/autotest_common.sh@1711 -- # lcov --version 01:00:33.576 05:59:27 spdk_dd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:00:33.576 05:59:28 spdk_dd -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:00:33.576 05:59:28 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:00:33.576 05:59:28 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 01:00:33.576 05:59:28 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 01:00:33.576 05:59:28 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 01:00:33.577 05:59:28 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 01:00:33.577 05:59:28 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 01:00:33.577 05:59:28 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 01:00:33.577 05:59:28 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 01:00:33.577 05:59:28 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 01:00:33.577 05:59:28 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 01:00:33.577 05:59:28 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:00:33.577 05:59:28 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 01:00:33.577 05:59:28 spdk_dd -- scripts/common.sh@345 -- # : 1 01:00:33.577 05:59:28 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 01:00:33.577 05:59:28 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:00:33.577 05:59:28 spdk_dd -- scripts/common.sh@365 -- # decimal 1 01:00:33.577 05:59:28 spdk_dd -- scripts/common.sh@353 -- # local d=1 01:00:33.577 05:59:28 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:00:33.577 05:59:28 spdk_dd -- scripts/common.sh@355 -- # echo 1 01:00:33.577 05:59:28 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 01:00:33.577 05:59:28 spdk_dd -- scripts/common.sh@366 -- # decimal 2 01:00:33.577 05:59:28 spdk_dd -- scripts/common.sh@353 -- # local d=2 01:00:33.577 05:59:28 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:00:33.577 05:59:28 spdk_dd -- scripts/common.sh@355 -- # echo 2 01:00:33.577 05:59:28 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 01:00:33.577 05:59:28 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:00:33.577 05:59:28 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:00:33.577 05:59:28 spdk_dd -- scripts/common.sh@368 -- # return 0 01:00:33.577 05:59:28 spdk_dd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:00:33.577 05:59:28 spdk_dd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:00:33.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:33.577 --rc genhtml_branch_coverage=1 01:00:33.577 --rc genhtml_function_coverage=1 01:00:33.577 --rc genhtml_legend=1 01:00:33.577 --rc geninfo_all_blocks=1 01:00:33.577 --rc geninfo_unexecuted_blocks=1 01:00:33.577 01:00:33.577 ' 01:00:33.577 05:59:28 spdk_dd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:00:33.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:33.577 --rc genhtml_branch_coverage=1 01:00:33.577 --rc genhtml_function_coverage=1 01:00:33.577 --rc genhtml_legend=1 01:00:33.577 --rc geninfo_all_blocks=1 01:00:33.577 --rc geninfo_unexecuted_blocks=1 01:00:33.577 01:00:33.577 ' 01:00:33.577 05:59:28 spdk_dd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:00:33.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:33.577 --rc genhtml_branch_coverage=1 01:00:33.577 --rc genhtml_function_coverage=1 01:00:33.577 --rc genhtml_legend=1 01:00:33.577 --rc geninfo_all_blocks=1 01:00:33.577 --rc geninfo_unexecuted_blocks=1 01:00:33.577 01:00:33.577 ' 01:00:33.577 05:59:28 spdk_dd -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:00:33.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:33.577 --rc genhtml_branch_coverage=1 01:00:33.577 --rc genhtml_function_coverage=1 01:00:33.577 --rc genhtml_legend=1 01:00:33.577 --rc geninfo_all_blocks=1 01:00:33.577 --rc geninfo_unexecuted_blocks=1 01:00:33.577 01:00:33.577 ' 01:00:33.577 05:59:28 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:00:33.577 05:59:28 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 01:00:33.577 05:59:28 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:00:33.577 05:59:28 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:00:33.577 05:59:28 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:00:33.577 05:59:28 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:33.577 05:59:28 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:33.577 05:59:28 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:33.577 05:59:28 spdk_dd -- paths/export.sh@5 -- # export PATH 01:00:33.577 05:59:28 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:33.577 05:59:28 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:00:34.167 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:00:34.167 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:00:34.167 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:00:34.167 05:59:28 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 01:00:34.167 05:59:28 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 01:00:34.167 05:59:28 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 01:00:34.167 05:59:28 spdk_dd -- scripts/common.sh@313 -- # local nvmes 01:00:34.167 05:59:28 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 01:00:34.167 05:59:28 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 01:00:34.167 05:59:28 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 01:00:34.167 05:59:28 spdk_dd -- scripts/common.sh@298 -- # local bdf= 01:00:34.167 05:59:28 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 01:00:34.167 05:59:28 spdk_dd -- scripts/common.sh@233 -- # local class 01:00:34.167 05:59:28 spdk_dd -- scripts/common.sh@234 -- # local subclass 01:00:34.167 05:59:28 spdk_dd -- scripts/common.sh@235 -- # local progif 01:00:34.167 05:59:28 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 01:00:34.167 05:59:28 spdk_dd -- scripts/common.sh@236 -- # class=01 01:00:34.167 05:59:28 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 01:00:34.167 05:59:28 spdk_dd -- scripts/common.sh@237 -- # subclass=08 01:00:34.167 05:59:28 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 01:00:34.167 05:59:28 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 01:00:34.167 05:59:28 spdk_dd -- scripts/common.sh@240 -- # hash lspci 01:00:34.167 05:59:28 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 01:00:34.167 05:59:28 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 01:00:34.167 05:59:28 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 01:00:34.167 05:59:28 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 01:00:34.167 05:59:28 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 01:00:34.427 05:59:28 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 01:00:34.427 05:59:28 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 01:00:34.427 05:59:28 spdk_dd -- scripts/common.sh@18 -- # local i 01:00:34.427 05:59:28 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 01:00:34.427 05:59:28 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 01:00:34.427 05:59:28 spdk_dd -- scripts/common.sh@27 -- # return 0 01:00:34.427 05:59:28 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 01:00:34.427 05:59:28 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 01:00:34.427 05:59:28 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 01:00:34.427 05:59:28 spdk_dd -- scripts/common.sh@18 -- # local i 01:00:34.427 05:59:28 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 01:00:34.427 05:59:28 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 01:00:34.427 05:59:28 spdk_dd -- scripts/common.sh@27 -- # return 0 01:00:34.427 05:59:28 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 01:00:34.427 05:59:28 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 01:00:34.427 05:59:28 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 01:00:34.427 05:59:28 spdk_dd -- scripts/common.sh@323 -- # uname -s 01:00:34.427 05:59:28 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 01:00:34.427 05:59:28 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 01:00:34.427 05:59:28 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 01:00:34.427 05:59:28 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 01:00:34.427 05:59:28 spdk_dd -- scripts/common.sh@323 -- # uname -s 01:00:34.427 05:59:28 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 01:00:34.427 05:59:28 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 01:00:34.427 05:59:28 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 01:00:34.427 05:59:28 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:00:34.427 05:59:28 spdk_dd -- dd/dd.sh@13 -- # check_liburing 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@139 -- # local lib 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 01:00:34.427 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 01:00:34.428 * spdk_dd linked to liburing 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 01:00:34.428 05:59:28 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 01:00:34.428 05:59:28 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 01:00:34.428 05:59:28 spdk_dd -- dd/common.sh@153 -- # return 0 01:00:34.428 05:59:28 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 01:00:34.428 05:59:28 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 01:00:34.428 05:59:28 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:00:34.428 05:59:28 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:34.428 05:59:28 spdk_dd -- common/autotest_common.sh@10 -- # set +x 01:00:34.428 ************************************ 01:00:34.428 START TEST spdk_dd_basic_rw 01:00:34.428 ************************************ 01:00:34.428 05:59:28 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 01:00:34.428 * Looking for test storage... 01:00:34.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 01:00:34.428 05:59:28 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:00:34.428 05:59:28 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lcov --version 01:00:34.428 05:59:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:00:34.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:34.687 --rc genhtml_branch_coverage=1 01:00:34.687 --rc genhtml_function_coverage=1 01:00:34.687 --rc genhtml_legend=1 01:00:34.687 --rc geninfo_all_blocks=1 01:00:34.687 --rc geninfo_unexecuted_blocks=1 01:00:34.687 01:00:34.687 ' 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:00:34.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:34.687 --rc genhtml_branch_coverage=1 01:00:34.687 --rc genhtml_function_coverage=1 01:00:34.687 --rc genhtml_legend=1 01:00:34.687 --rc geninfo_all_blocks=1 01:00:34.687 --rc geninfo_unexecuted_blocks=1 01:00:34.687 01:00:34.687 ' 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:00:34.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:34.687 --rc genhtml_branch_coverage=1 01:00:34.687 --rc genhtml_function_coverage=1 01:00:34.687 --rc genhtml_legend=1 01:00:34.687 --rc geninfo_all_blocks=1 01:00:34.687 --rc geninfo_unexecuted_blocks=1 01:00:34.687 01:00:34.687 ' 01:00:34.687 05:59:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:00:34.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:34.687 --rc genhtml_branch_coverage=1 01:00:34.687 --rc genhtml_function_coverage=1 01:00:34.687 --rc genhtml_legend=1 01:00:34.688 --rc geninfo_all_blocks=1 01:00:34.688 --rc geninfo_unexecuted_blocks=1 01:00:34.688 01:00:34.688 ' 01:00:34.688 05:59:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:00:34.688 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 01:00:34.688 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:00:34.688 05:59:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:00:34.688 05:59:29 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:00:34.688 05:59:29 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:34.688 05:59:29 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:34.688 05:59:29 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:34.688 05:59:29 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 01:00:34.688 05:59:29 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:34.688 05:59:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 01:00:34.688 05:59:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 01:00:34.688 05:59:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 01:00:34.688 05:59:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 01:00:34.688 05:59:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 01:00:34.688 05:59:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 01:00:34.688 05:59:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 01:00:34.688 05:59:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:00:34.688 05:59:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
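The entries that follow dump the controller's full spdk_nvme_identify output twice while get_native_nvme_bs extracts the in-use LBA data size with two regex matches (first the current LBA format index, then that format's data size). A minimal bash sketch of that probe, using the binary path and regexes as they appear in the trace rather than the exact dd/common.sh source:

pci=0000:00:10.0
# Capture the identify output line by line, as the mapfile -t id call in the trace does.
mapfile -t id < <(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")
# First match: which LBA format is currently in use ("04" on this controller).
re_current='Current LBA Format: *LBA Format #([0-9]+)'
if [[ ${id[*]} =~ $re_current ]]; then
  lbaf=${BASH_REMATCH[1]}
  # Second match: the data size of that format (4096 bytes here).
  re_size="LBA Format #${lbaf}: Data Size: *([0-9]+)"
  [[ ${id[*]} =~ $re_size ]] && echo "${BASH_REMATCH[1]}"
fi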
01:00:34.688 05:59:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 01:00:34.688 05:59:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 01:00:34.688 05:59:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 01:00:34.688 05:59:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 01:00:34.948 05:59:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 01:00:34.948 05:59:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 01:00:34.948 05:59:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 01:00:34.948 05:59:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 01:00:34.948 05:59:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 01:00:34.948 05:59:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 01:00:34.948 05:59:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 01:00:34.948 05:59:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 01:00:34.948 05:59:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 01:00:34.948 05:59:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 01:00:34.948 05:59:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 01:00:34.948 05:59:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:34.948 05:59:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 01:00:34.948 05:59:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 01:00:34.948 ************************************ 01:00:34.948 START TEST dd_bs_lt_native_bs 01:00:34.949 ************************************ 01:00:34.949 05:59:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 01:00:34.949 05:59:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 01:00:34.949 05:59:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 01:00:34.949 05:59:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:00:34.949 05:59:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:34.949 05:59:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:00:34.949 05:59:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:34.949 05:59:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:00:34.949 05:59:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:34.949 05:59:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:00:34.949 05:59:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:00:34.949 05:59:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 01:00:34.949 { 01:00:34.949 "subsystems": [ 01:00:34.949 { 01:00:34.949 "subsystem": "bdev", 01:00:34.949 "config": [ 01:00:34.949 { 01:00:34.949 "params": { 01:00:34.949 "trtype": "pcie", 01:00:34.949 "traddr": "0000:00:10.0", 01:00:34.949 "name": "Nvme0" 01:00:34.949 }, 01:00:34.949 "method": "bdev_nvme_attach_controller" 01:00:34.949 }, 01:00:34.949 { 01:00:34.949 "method": "bdev_wait_for_examine" 01:00:34.949 } 01:00:34.949 ] 01:00:34.949 } 01:00:34.949 ] 01:00:34.949 } 01:00:34.949 [2024-12-09 05:59:29.414062] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:00:34.949 [2024-12-09 05:59:29.414315] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59555 ] 01:00:35.207 [2024-12-09 05:59:29.563093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:35.207 [2024-12-09 05:59:29.612471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:35.207 [2024-12-09 05:59:29.659834] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:35.207 [2024-12-09 05:59:29.761623] spdk_dd.c:1159:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 01:00:35.207 [2024-12-09 05:59:29.761669] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:00:35.465 [2024-12-09 05:59:29.864327] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 01:00:35.465 05:59:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 01:00:35.465 05:59:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:00:35.465 05:59:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 01:00:35.465 05:59:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 01:00:35.465 05:59:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 01:00:35.465 05:59:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:00:35.465 01:00:35.465 real 0m0.569s 01:00:35.465 user 0m0.351s 01:00:35.465 sys 0m0.166s 01:00:35.465 05:59:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:35.465 
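For readers skimming the trace: dd_bs_lt_native_bs only passes when spdk_dd refuses a --bs smaller than the 4096-byte native block size extracted from the identify data above (LBA Format #04), which is exactly what the "--bs value cannot be less than ... native block size" ERROR line shows. A minimal stand-alone sketch of that negative check follows; it is an illustration, not the run_test/NOT helpers from autotest_common.sh, and the /dev/zero input is a stand-in for the generated data the real test feeds in over /dev/fd.

    #!/usr/bin/env bash
    # Sketch: expect spdk_dd to FAIL when --bs is below the bdev's native block size.
    spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    conf='{"subsystems":[{"subsystem":"bdev","config":[
            {"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},
             "method":"bdev_nvme_attach_controller"},
            {"method":"bdev_wait_for_examine"}]}]}'
    native_bs=4096
    small_bs=2048   # deliberately smaller than native_bs
    if "$spdk_dd" --if=/dev/zero --ob=Nvme0n1 --bs="$small_bs" --json <(printf '%s' "$conf"); then
            echo "FAIL: spdk_dd accepted --bs=$small_bs (< $native_bs)" >&2
            exit 1
    fi
    echo "PASS: spdk_dd rejected --bs=$small_bs as expected"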
************************************ 01:00:35.465 END TEST dd_bs_lt_native_bs 01:00:35.465 ************************************ 01:00:35.465 05:59:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 01:00:35.465 05:59:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 01:00:35.465 05:59:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:00:35.465 05:59:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:35.465 05:59:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 01:00:35.465 ************************************ 01:00:35.465 START TEST dd_rw 01:00:35.465 ************************************ 01:00:35.465 05:59:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 01:00:35.465 05:59:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 01:00:35.465 05:59:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 01:00:35.465 05:59:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 01:00:35.465 05:59:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 01:00:35.465 05:59:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 01:00:35.465 05:59:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 01:00:35.465 05:59:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 01:00:35.465 05:59:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 01:00:35.465 05:59:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 01:00:35.465 05:59:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 01:00:35.465 05:59:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 01:00:35.465 05:59:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 01:00:35.465 05:59:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 01:00:35.465 05:59:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 01:00:35.465 05:59:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 01:00:35.465 05:59:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 01:00:35.465 05:59:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 01:00:35.465 05:59:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:00:36.033 05:59:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 01:00:36.033 05:59:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 01:00:36.033 05:59:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:00:36.033 05:59:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:00:36.033 [2024-12-09 05:59:30.525810] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:36.033 [2024-12-09 05:59:30.526019] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59586 ] 01:00:36.033 { 01:00:36.033 "subsystems": [ 01:00:36.033 { 01:00:36.033 "subsystem": "bdev", 01:00:36.033 "config": [ 01:00:36.033 { 01:00:36.033 "params": { 01:00:36.033 "trtype": "pcie", 01:00:36.033 "traddr": "0000:00:10.0", 01:00:36.033 "name": "Nvme0" 01:00:36.033 }, 01:00:36.033 "method": "bdev_nvme_attach_controller" 01:00:36.033 }, 01:00:36.033 { 01:00:36.033 "method": "bdev_wait_for_examine" 01:00:36.033 } 01:00:36.033 ] 01:00:36.033 } 01:00:36.033 ] 01:00:36.033 } 01:00:36.293 [2024-12-09 05:59:30.677304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:36.293 [2024-12-09 05:59:30.721105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:36.293 [2024-12-09 05:59:30.765130] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:36.293  [2024-12-09T05:59:31.139Z] Copying: 60/60 [kB] (average 14 MBps) 01:00:36.552 01:00:36.552 05:59:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 01:00:36.552 05:59:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 01:00:36.552 05:59:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:00:36.552 05:59:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:00:36.552 { 01:00:36.552 "subsystems": [ 01:00:36.552 { 01:00:36.552 "subsystem": "bdev", 01:00:36.552 "config": [ 01:00:36.552 { 01:00:36.552 "params": { 01:00:36.552 "trtype": "pcie", 01:00:36.552 "traddr": "0000:00:10.0", 01:00:36.552 "name": "Nvme0" 01:00:36.552 }, 01:00:36.552 "method": "bdev_nvme_attach_controller" 01:00:36.552 }, 01:00:36.552 { 01:00:36.552 "method": "bdev_wait_for_examine" 01:00:36.552 } 01:00:36.552 ] 01:00:36.552 } 01:00:36.552 ] 01:00:36.552 } 01:00:36.552 [2024-12-09 05:59:31.082322] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
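Every spdk_dd invocation in this suite receives the same single-controller bdev configuration, visible in the repeated JSON blocks: attach the PCIe controller at 0000:00:10.0 as Nvme0, then wait for bdev examine to finish. In the scripts this comes from gen_conf and reaches spdk_dd as a /dev/fd path; the sketch below reproduces the same effect with a plain here-doc and process substitution (the helper name nvme_conf is illustrative, not part of the SPDK scripts).

    # Emit the bdev config shown in the JSON blocks above.
    nvme_conf() {
            cat <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
              "method": "bdev_nvme_attach_controller"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    JSON
    }

    # Example: the 15-block, 4 KiB, qd=1 write from the run above.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
            --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
            --ob=Nvme0n1 --bs=4096 --qd=1 --json <(nvme_conf)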
01:00:36.552 [2024-12-09 05:59:31.082389] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59594 ] 01:00:36.810 [2024-12-09 05:59:31.234491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:36.810 [2024-12-09 05:59:31.280530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:36.811 [2024-12-09 05:59:31.328679] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:37.069  [2024-12-09T05:59:31.656Z] Copying: 60/60 [kB] (average 14 MBps) 01:00:37.069 01:00:37.069 05:59:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:00:37.069 05:59:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 01:00:37.069 05:59:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 01:00:37.069 05:59:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 01:00:37.069 05:59:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 01:00:37.069 05:59:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 01:00:37.069 05:59:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 01:00:37.069 05:59:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 01:00:37.069 05:59:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 01:00:37.069 05:59:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:00:37.069 05:59:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:00:37.069 [2024-12-09 05:59:31.649710] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:37.069 [2024-12-09 05:59:31.649769] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59615 ] 01:00:37.069 { 01:00:37.069 "subsystems": [ 01:00:37.069 { 01:00:37.069 "subsystem": "bdev", 01:00:37.069 "config": [ 01:00:37.069 { 01:00:37.069 "params": { 01:00:37.069 "trtype": "pcie", 01:00:37.069 "traddr": "0000:00:10.0", 01:00:37.069 "name": "Nvme0" 01:00:37.069 }, 01:00:37.069 "method": "bdev_nvme_attach_controller" 01:00:37.069 }, 01:00:37.069 { 01:00:37.069 "method": "bdev_wait_for_examine" 01:00:37.069 } 01:00:37.069 ] 01:00:37.069 } 01:00:37.069 ] 01:00:37.069 } 01:00:37.327 [2024-12-09 05:59:31.801223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:37.327 [2024-12-09 05:59:31.841890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:37.327 [2024-12-09 05:59:31.885865] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:37.586  [2024-12-09T05:59:32.173Z] Copying: 1024/1024 [kB] (average 1000 MBps) 01:00:37.586 01:00:37.586 05:59:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 01:00:37.586 05:59:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 01:00:37.586 05:59:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 01:00:37.586 05:59:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 01:00:37.586 05:59:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 01:00:37.586 05:59:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 01:00:37.586 05:59:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:00:38.153 05:59:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 01:00:38.153 05:59:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 01:00:38.153 05:59:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:00:38.153 05:59:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:00:38.153 [2024-12-09 05:59:32.666551] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:38.153 [2024-12-09 05:59:32.666780] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59634 ] 01:00:38.153 { 01:00:38.153 "subsystems": [ 01:00:38.153 { 01:00:38.153 "subsystem": "bdev", 01:00:38.153 "config": [ 01:00:38.153 { 01:00:38.153 "params": { 01:00:38.153 "trtype": "pcie", 01:00:38.153 "traddr": "0000:00:10.0", 01:00:38.153 "name": "Nvme0" 01:00:38.153 }, 01:00:38.153 "method": "bdev_nvme_attach_controller" 01:00:38.153 }, 01:00:38.153 { 01:00:38.153 "method": "bdev_wait_for_examine" 01:00:38.153 } 01:00:38.153 ] 01:00:38.153 } 01:00:38.153 ] 01:00:38.153 } 01:00:38.412 [2024-12-09 05:59:32.816353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:38.412 [2024-12-09 05:59:32.860971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:38.412 [2024-12-09 05:59:32.904991] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:38.672  [2024-12-09T05:59:33.259Z] Copying: 60/60 [kB] (average 58 MBps) 01:00:38.672 01:00:38.672 05:59:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 01:00:38.672 05:59:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 01:00:38.672 05:59:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:00:38.672 05:59:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:00:38.672 [2024-12-09 05:59:33.220603] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:38.672 [2024-12-09 05:59:33.220671] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59642 ] 01:00:38.672 { 01:00:38.672 "subsystems": [ 01:00:38.672 { 01:00:38.672 "subsystem": "bdev", 01:00:38.672 "config": [ 01:00:38.672 { 01:00:38.672 "params": { 01:00:38.672 "trtype": "pcie", 01:00:38.672 "traddr": "0000:00:10.0", 01:00:38.672 "name": "Nvme0" 01:00:38.672 }, 01:00:38.672 "method": "bdev_nvme_attach_controller" 01:00:38.672 }, 01:00:38.672 { 01:00:38.672 "method": "bdev_wait_for_examine" 01:00:38.672 } 01:00:38.672 ] 01:00:38.672 } 01:00:38.672 ] 01:00:38.672 } 01:00:38.930 [2024-12-09 05:59:33.370316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:38.930 [2024-12-09 05:59:33.416972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:38.930 [2024-12-09 05:59:33.463597] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:39.188  [2024-12-09T05:59:33.775Z] Copying: 60/60 [kB] (average 29 MBps) 01:00:39.188 01:00:39.188 05:59:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:00:39.188 05:59:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 01:00:39.188 05:59:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 01:00:39.188 05:59:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 01:00:39.188 05:59:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 01:00:39.189 05:59:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 01:00:39.189 05:59:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 01:00:39.189 05:59:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 01:00:39.189 05:59:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 01:00:39.189 05:59:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:00:39.189 05:59:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:00:39.448 [2024-12-09 05:59:33.783758] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
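Each (block size, queue depth) round of dd_rw above follows the same four steps: write the random dump0 file to the bdev, read the same number of blocks back into dump1, diff the two files, and zero the head of the namespace before the next round; the trace iterates bs over 4096, 8192 and 16384 (native_bs shifted left by 0..2) and qd over 1 and 64. A condensed sketch of one round is shown below, reusing the spdk_dd path from the log and the illustrative nvme_conf helper from the earlier sketch; it is a simplification of basic_rw/clear_nvme, not their implementation.

    spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0   # gen_bytes fills this with bs*count random bytes
    dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    bs=4096 qd=1 count=15                                  # one of the (bs, qd) pairs exercised above

    "$spdk_dd" --if="$dump0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(nvme_conf)                   # write
    "$spdk_dd" --ib=Nvme0n1 --of="$dump1" --bs="$bs" --qd="$qd" --count="$count" --json <(nvme_conf)  # read back
    diff -q "$dump0" "$dump1"                                                                         # must match
    "$spdk_dd" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json <(nvme_conf)                 # simplified clear_nvme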
01:00:39.448 [2024-12-09 05:59:33.783840] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59663 ] 01:00:39.448 { 01:00:39.448 "subsystems": [ 01:00:39.448 { 01:00:39.448 "subsystem": "bdev", 01:00:39.448 "config": [ 01:00:39.448 { 01:00:39.448 "params": { 01:00:39.448 "trtype": "pcie", 01:00:39.448 "traddr": "0000:00:10.0", 01:00:39.448 "name": "Nvme0" 01:00:39.448 }, 01:00:39.448 "method": "bdev_nvme_attach_controller" 01:00:39.448 }, 01:00:39.448 { 01:00:39.448 "method": "bdev_wait_for_examine" 01:00:39.448 } 01:00:39.448 ] 01:00:39.448 } 01:00:39.448 ] 01:00:39.448 } 01:00:39.448 [2024-12-09 05:59:33.935890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:39.448 [2024-12-09 05:59:33.978017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:39.448 [2024-12-09 05:59:34.021119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:39.707  [2024-12-09T05:59:34.294Z] Copying: 1024/1024 [kB] (average 1000 MBps) 01:00:39.707 01:00:39.707 05:59:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 01:00:39.707 05:59:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 01:00:39.707 05:59:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 01:00:39.707 05:59:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 01:00:39.707 05:59:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 01:00:39.707 05:59:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 01:00:39.707 05:59:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 01:00:39.707 05:59:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:00:40.276 05:59:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 01:00:40.276 05:59:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 01:00:40.276 05:59:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:00:40.276 05:59:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:00:40.276 [2024-12-09 05:59:34.772367] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:40.276 [2024-12-09 05:59:34.772607] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59682 ] 01:00:40.276 { 01:00:40.276 "subsystems": [ 01:00:40.276 { 01:00:40.276 "subsystem": "bdev", 01:00:40.276 "config": [ 01:00:40.276 { 01:00:40.276 "params": { 01:00:40.276 "trtype": "pcie", 01:00:40.276 "traddr": "0000:00:10.0", 01:00:40.276 "name": "Nvme0" 01:00:40.276 }, 01:00:40.276 "method": "bdev_nvme_attach_controller" 01:00:40.276 }, 01:00:40.276 { 01:00:40.276 "method": "bdev_wait_for_examine" 01:00:40.276 } 01:00:40.276 ] 01:00:40.276 } 01:00:40.276 ] 01:00:40.276 } 01:00:40.535 [2024-12-09 05:59:34.922500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:40.535 [2024-12-09 05:59:34.964179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:40.535 [2024-12-09 05:59:35.007994] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:40.535  [2024-12-09T05:59:35.382Z] Copying: 56/56 [kB] (average 27 MBps) 01:00:40.795 01:00:40.795 05:59:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 01:00:40.795 05:59:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 01:00:40.795 05:59:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:00:40.795 05:59:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:00:40.795 [2024-12-09 05:59:35.322227] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:40.795 [2024-12-09 05:59:35.322293] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59696 ] 01:00:40.795 { 01:00:40.795 "subsystems": [ 01:00:40.795 { 01:00:40.795 "subsystem": "bdev", 01:00:40.795 "config": [ 01:00:40.795 { 01:00:40.795 "params": { 01:00:40.795 "trtype": "pcie", 01:00:40.795 "traddr": "0000:00:10.0", 01:00:40.795 "name": "Nvme0" 01:00:40.795 }, 01:00:40.795 "method": "bdev_nvme_attach_controller" 01:00:40.795 }, 01:00:40.795 { 01:00:40.795 "method": "bdev_wait_for_examine" 01:00:40.795 } 01:00:40.795 ] 01:00:40.795 } 01:00:40.795 ] 01:00:40.795 } 01:00:41.053 [2024-12-09 05:59:35.474292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:41.053 [2024-12-09 05:59:35.517527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:41.053 [2024-12-09 05:59:35.561549] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:41.311  [2024-12-09T05:59:35.898Z] Copying: 56/56 [kB] (average 27 MBps) 01:00:41.311 01:00:41.311 05:59:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:00:41.311 05:59:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 01:00:41.311 05:59:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 01:00:41.311 05:59:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 01:00:41.311 05:59:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 01:00:41.311 05:59:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 01:00:41.311 05:59:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 01:00:41.311 05:59:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 01:00:41.311 05:59:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 01:00:41.311 05:59:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:00:41.311 05:59:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:00:41.311 [2024-12-09 05:59:35.882259] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:41.311 [2024-12-09 05:59:35.882341] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59711 ] 01:00:41.311 { 01:00:41.311 "subsystems": [ 01:00:41.311 { 01:00:41.311 "subsystem": "bdev", 01:00:41.311 "config": [ 01:00:41.311 { 01:00:41.311 "params": { 01:00:41.311 "trtype": "pcie", 01:00:41.311 "traddr": "0000:00:10.0", 01:00:41.311 "name": "Nvme0" 01:00:41.311 }, 01:00:41.311 "method": "bdev_nvme_attach_controller" 01:00:41.311 }, 01:00:41.311 { 01:00:41.311 "method": "bdev_wait_for_examine" 01:00:41.311 } 01:00:41.311 ] 01:00:41.311 } 01:00:41.311 ] 01:00:41.311 } 01:00:41.570 [2024-12-09 05:59:36.033490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:41.570 [2024-12-09 05:59:36.076626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:41.570 [2024-12-09 05:59:36.120517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:41.829  [2024-12-09T05:59:36.416Z] Copying: 1024/1024 [kB] (average 500 MBps) 01:00:41.829 01:00:41.829 05:59:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 01:00:41.829 05:59:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 01:00:41.829 05:59:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 01:00:41.829 05:59:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 01:00:41.829 05:59:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 01:00:41.829 05:59:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 01:00:41.829 05:59:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:00:42.397 05:59:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 01:00:42.397 05:59:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 01:00:42.397 05:59:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:00:42.397 05:59:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:00:42.397 [2024-12-09 05:59:36.870830] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:42.397 [2024-12-09 05:59:36.870899] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59730 ] 01:00:42.397 { 01:00:42.397 "subsystems": [ 01:00:42.397 { 01:00:42.397 "subsystem": "bdev", 01:00:42.397 "config": [ 01:00:42.397 { 01:00:42.397 "params": { 01:00:42.397 "trtype": "pcie", 01:00:42.397 "traddr": "0000:00:10.0", 01:00:42.397 "name": "Nvme0" 01:00:42.397 }, 01:00:42.397 "method": "bdev_nvme_attach_controller" 01:00:42.397 }, 01:00:42.397 { 01:00:42.397 "method": "bdev_wait_for_examine" 01:00:42.397 } 01:00:42.397 ] 01:00:42.397 } 01:00:42.397 ] 01:00:42.397 } 01:00:42.656 [2024-12-09 05:59:37.021827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:42.656 [2024-12-09 05:59:37.064719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:42.656 [2024-12-09 05:59:37.107964] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:42.656  [2024-12-09T05:59:37.502Z] Copying: 56/56 [kB] (average 54 MBps) 01:00:42.915 01:00:42.915 05:59:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 01:00:42.915 05:59:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:00:42.915 05:59:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 01:00:42.915 05:59:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:00:42.915 { 01:00:42.915 "subsystems": [ 01:00:42.915 { 01:00:42.915 "subsystem": "bdev", 01:00:42.915 "config": [ 01:00:42.915 { 01:00:42.915 "params": { 01:00:42.915 "trtype": "pcie", 01:00:42.915 "traddr": "0000:00:10.0", 01:00:42.915 "name": "Nvme0" 01:00:42.915 }, 01:00:42.915 "method": "bdev_nvme_attach_controller" 01:00:42.915 }, 01:00:42.915 { 01:00:42.915 "method": "bdev_wait_for_examine" 01:00:42.915 } 01:00:42.915 ] 01:00:42.915 } 01:00:42.915 ] 01:00:42.915 } 01:00:42.915 [2024-12-09 05:59:37.417219] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:42.915 [2024-12-09 05:59:37.417281] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59744 ] 01:00:43.175 [2024-12-09 05:59:37.569863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:43.175 [2024-12-09 05:59:37.610803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:43.175 [2024-12-09 05:59:37.654291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:43.175  [2024-12-09T05:59:38.022Z] Copying: 56/56 [kB] (average 54 MBps) 01:00:43.435 01:00:43.435 05:59:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:00:43.435 05:59:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 01:00:43.435 05:59:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 01:00:43.435 05:59:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 01:00:43.435 05:59:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 01:00:43.435 05:59:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 01:00:43.435 05:59:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 01:00:43.435 05:59:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 01:00:43.435 05:59:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 01:00:43.436 05:59:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:00:43.436 05:59:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:00:43.436 [2024-12-09 05:59:37.972850] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:43.436 [2024-12-09 05:59:37.973048] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59759 ] 01:00:43.436 { 01:00:43.436 "subsystems": [ 01:00:43.436 { 01:00:43.436 "subsystem": "bdev", 01:00:43.436 "config": [ 01:00:43.436 { 01:00:43.436 "params": { 01:00:43.436 "trtype": "pcie", 01:00:43.436 "traddr": "0000:00:10.0", 01:00:43.436 "name": "Nvme0" 01:00:43.436 }, 01:00:43.436 "method": "bdev_nvme_attach_controller" 01:00:43.436 }, 01:00:43.436 { 01:00:43.436 "method": "bdev_wait_for_examine" 01:00:43.436 } 01:00:43.436 ] 01:00:43.436 } 01:00:43.436 ] 01:00:43.436 } 01:00:43.696 [2024-12-09 05:59:38.121009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:43.696 [2024-12-09 05:59:38.165472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:43.696 [2024-12-09 05:59:38.209993] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:43.955  [2024-12-09T05:59:38.542Z] Copying: 1024/1024 [kB] (average 1000 MBps) 01:00:43.955 01:00:43.955 05:59:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 01:00:43.955 05:59:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 01:00:43.955 05:59:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 01:00:43.955 05:59:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 01:00:43.955 05:59:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 01:00:43.955 05:59:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 01:00:43.955 05:59:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 01:00:43.955 05:59:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:00:44.524 05:59:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 01:00:44.524 05:59:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 01:00:44.524 05:59:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:00:44.524 05:59:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:00:44.524 [2024-12-09 05:59:38.902078] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:44.524 [2024-12-09 05:59:38.902163] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59778 ] 01:00:44.524 { 01:00:44.524 "subsystems": [ 01:00:44.524 { 01:00:44.524 "subsystem": "bdev", 01:00:44.524 "config": [ 01:00:44.524 { 01:00:44.524 "params": { 01:00:44.524 "trtype": "pcie", 01:00:44.524 "traddr": "0000:00:10.0", 01:00:44.524 "name": "Nvme0" 01:00:44.524 }, 01:00:44.524 "method": "bdev_nvme_attach_controller" 01:00:44.524 }, 01:00:44.524 { 01:00:44.524 "method": "bdev_wait_for_examine" 01:00:44.524 } 01:00:44.524 ] 01:00:44.524 } 01:00:44.524 ] 01:00:44.524 } 01:00:44.524 [2024-12-09 05:59:39.051601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:44.524 [2024-12-09 05:59:39.096660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:44.784 [2024-12-09 05:59:39.141208] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:44.784  [2024-12-09T05:59:39.631Z] Copying: 48/48 [kB] (average 46 MBps) 01:00:45.044 01:00:45.044 05:59:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 01:00:45.044 05:59:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 01:00:45.044 05:59:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:00:45.044 05:59:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:00:45.044 [2024-12-09 05:59:39.456986] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:45.044 [2024-12-09 05:59:39.457055] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59792 ] 01:00:45.044 { 01:00:45.044 "subsystems": [ 01:00:45.044 { 01:00:45.044 "subsystem": "bdev", 01:00:45.044 "config": [ 01:00:45.044 { 01:00:45.044 "params": { 01:00:45.044 "trtype": "pcie", 01:00:45.044 "traddr": "0000:00:10.0", 01:00:45.044 "name": "Nvme0" 01:00:45.044 }, 01:00:45.044 "method": "bdev_nvme_attach_controller" 01:00:45.044 }, 01:00:45.044 { 01:00:45.044 "method": "bdev_wait_for_examine" 01:00:45.044 } 01:00:45.044 ] 01:00:45.044 } 01:00:45.044 ] 01:00:45.044 } 01:00:45.044 [2024-12-09 05:59:39.606570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:45.303 [2024-12-09 05:59:39.649799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:45.303 [2024-12-09 05:59:39.693410] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:45.303  [2024-12-09T05:59:40.150Z] Copying: 48/48 [kB] (average 23 MBps) 01:00:45.563 01:00:45.563 05:59:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:00:45.563 05:59:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 01:00:45.563 05:59:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 01:00:45.563 05:59:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 01:00:45.563 05:59:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 01:00:45.563 05:59:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 01:00:45.563 05:59:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 01:00:45.563 05:59:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 01:00:45.563 05:59:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 01:00:45.563 05:59:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:00:45.563 05:59:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:00:45.563 [2024-12-09 05:59:40.015024] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:45.563 [2024-12-09 05:59:40.015231] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59807 ] 01:00:45.563 { 01:00:45.563 "subsystems": [ 01:00:45.563 { 01:00:45.563 "subsystem": "bdev", 01:00:45.563 "config": [ 01:00:45.563 { 01:00:45.564 "params": { 01:00:45.564 "trtype": "pcie", 01:00:45.564 "traddr": "0000:00:10.0", 01:00:45.564 "name": "Nvme0" 01:00:45.564 }, 01:00:45.564 "method": "bdev_nvme_attach_controller" 01:00:45.564 }, 01:00:45.564 { 01:00:45.564 "method": "bdev_wait_for_examine" 01:00:45.564 } 01:00:45.564 ] 01:00:45.564 } 01:00:45.564 ] 01:00:45.564 } 01:00:45.823 [2024-12-09 05:59:40.165004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:45.823 [2024-12-09 05:59:40.207254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:45.823 [2024-12-09 05:59:40.249871] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:45.823  [2024-12-09T05:59:40.668Z] Copying: 1024/1024 [kB] (average 1000 MBps) 01:00:46.081 01:00:46.081 05:59:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 01:00:46.081 05:59:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 01:00:46.081 05:59:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 01:00:46.081 05:59:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 01:00:46.081 05:59:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 01:00:46.081 05:59:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 01:00:46.081 05:59:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:00:46.339 05:59:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 01:00:46.339 05:59:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 01:00:46.339 05:59:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:00:46.339 05:59:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:00:46.598 [2024-12-09 05:59:40.939391] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:46.598 [2024-12-09 05:59:40.939472] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59825 ] 01:00:46.598 { 01:00:46.598 "subsystems": [ 01:00:46.598 { 01:00:46.598 "subsystem": "bdev", 01:00:46.598 "config": [ 01:00:46.598 { 01:00:46.598 "params": { 01:00:46.598 "trtype": "pcie", 01:00:46.598 "traddr": "0000:00:10.0", 01:00:46.598 "name": "Nvme0" 01:00:46.598 }, 01:00:46.598 "method": "bdev_nvme_attach_controller" 01:00:46.598 }, 01:00:46.598 { 01:00:46.598 "method": "bdev_wait_for_examine" 01:00:46.598 } 01:00:46.598 ] 01:00:46.598 } 01:00:46.598 ] 01:00:46.598 } 01:00:46.598 [2024-12-09 05:59:41.087894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:46.598 [2024-12-09 05:59:41.131500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:46.598 [2024-12-09 05:59:41.175065] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:46.856  [2024-12-09T05:59:41.443Z] Copying: 48/48 [kB] (average 46 MBps) 01:00:46.856 01:00:46.856 05:59:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 01:00:46.856 05:59:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 01:00:46.856 05:59:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:00:46.856 05:59:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:00:47.115 { 01:00:47.115 "subsystems": [ 01:00:47.115 { 01:00:47.115 "subsystem": "bdev", 01:00:47.115 "config": [ 01:00:47.115 { 01:00:47.115 "params": { 01:00:47.115 "trtype": "pcie", 01:00:47.115 "traddr": "0000:00:10.0", 01:00:47.115 "name": "Nvme0" 01:00:47.115 }, 01:00:47.115 "method": "bdev_nvme_attach_controller" 01:00:47.115 }, 01:00:47.115 { 01:00:47.115 "method": "bdev_wait_for_examine" 01:00:47.115 } 01:00:47.115 ] 01:00:47.115 } 01:00:47.115 ] 01:00:47.115 } 01:00:47.115 [2024-12-09 05:59:41.485745] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:47.115 [2024-12-09 05:59:41.485815] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59836 ] 01:00:47.115 [2024-12-09 05:59:41.634551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:47.115 [2024-12-09 05:59:41.678126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:47.373 [2024-12-09 05:59:41.721857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:47.373  [2024-12-09T05:59:42.219Z] Copying: 48/48 [kB] (average 46 MBps) 01:00:47.632 01:00:47.632 05:59:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:00:47.632 05:59:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 01:00:47.632 05:59:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 01:00:47.632 05:59:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 01:00:47.632 05:59:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 01:00:47.632 05:59:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 01:00:47.632 05:59:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 01:00:47.632 05:59:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 01:00:47.632 05:59:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 01:00:47.632 05:59:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:00:47.632 05:59:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:00:47.632 [2024-12-09 05:59:42.039794] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:47.632 [2024-12-09 05:59:42.039861] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59855 ] 01:00:47.632 { 01:00:47.632 "subsystems": [ 01:00:47.632 { 01:00:47.632 "subsystem": "bdev", 01:00:47.632 "config": [ 01:00:47.632 { 01:00:47.632 "params": { 01:00:47.632 "trtype": "pcie", 01:00:47.632 "traddr": "0000:00:10.0", 01:00:47.632 "name": "Nvme0" 01:00:47.632 }, 01:00:47.632 "method": "bdev_nvme_attach_controller" 01:00:47.632 }, 01:00:47.632 { 01:00:47.632 "method": "bdev_wait_for_examine" 01:00:47.632 } 01:00:47.632 ] 01:00:47.632 } 01:00:47.632 ] 01:00:47.632 } 01:00:47.632 [2024-12-09 05:59:42.190468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:47.891 [2024-12-09 05:59:42.232119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:47.891 [2024-12-09 05:59:42.276029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:47.891  [2024-12-09T05:59:42.738Z] Copying: 1024/1024 [kB] (average 1000 MBps) 01:00:48.151 01:00:48.151 01:00:48.151 real 0m12.535s 01:00:48.151 user 0m8.654s 01:00:48.151 sys 0m5.003s 01:00:48.151 05:59:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:48.151 05:59:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:00:48.151 ************************************ 01:00:48.151 END TEST dd_rw 01:00:48.151 ************************************ 01:00:48.151 05:59:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 01:00:48.151 05:59:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:00:48.151 05:59:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:48.151 05:59:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 01:00:48.151 ************************************ 01:00:48.151 START TEST dd_rw_offset 01:00:48.151 ************************************ 01:00:48.151 05:59:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 01:00:48.151 05:59:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 01:00:48.151 05:59:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 01:00:48.151 05:59:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 01:00:48.151 05:59:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 01:00:48.151 05:59:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 01:00:48.151 05:59:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=3qtbfwjwssv7orplpvtpwtdgkmw6r6t78ptm18g7mfsml749iyvw6mf8wngl1r64mqw21zsa08tc380b5zyvlj0ubs6r9ju7dutpb5gg50p7c4sjkvy5rntg8gfafh551m7op9glx1ml3i699dy0eciii2dz0ye8vt024huzs6n8vb3zc5o8vm193mmqudhr2ltiobxxxcfb6zg2iic44ohk4rynvccjh1ooljxu66j6snd3033ufx5665d27lv438ertgbd2gs667o300p9qq5kxj0wkdtbtx8wguw5jws9ppbydslltmnmvun1wpuytc44up8b8vne2gy6ef8b0oyzxw9uc9h332in30wfynzetidw4ppbcay1wfegp0a7tsiq8zw8pczxt2lemqvbwkfomckwsok0f3pgmuqt2m11rspwhzk64fi24wgdcfi0o109858hxsbtcfxzbkr2f9wo7x1byfvcuj8x4ukdc7d2klihhmui43assm7vfihuwriqdww7gn6df3cg7yu94h4z1bk0oa30amr3749ue98jz1hfh1m877xjseqex34zsot7fwxbho6db0busxyn1irzj9okmjee41zilqq8s0fopoj0my2e5d801wfiortbe89fm21y7p385adicmte3ohgtvc321ludzna4lkw6n4tko4vqt2jp8667kkx4anfjuunqlbuzbms4l9huegarc2u1e84a1f3hqtr268gnxbrb4w61eri2bmyood5wey3p4u1oer08sod8nh1is8ejznuotfcg82lprfc1f7uq0dmqko6nbkx7d6ntr6gipk7yb8olx56vq131hgat5ay9q2u2sjhtyluiz94k2pjwp2jf88ft5frtmng9871jlqcbcgml0x7o5xkqr1sifyc3l0gmoxwpm1kt4d3zqiq2ndy2xifnp055hlvopwe7il631xm67plevotwz9hcuuwae4w8jk4zb5pbtufs6jcavzlikl0n9diuvezqhxtbu7s1hjqatutj0519pb6w8yr3x8o24x8ttp2ihbjqr96xs6wlscs52io21q7dgj70iqpumam0fjz1w1tm0lnesplz2ycasp1krpz6j3jqugpqlyy9u2jryiedc6kbpaec4elwjj1vfri1wr8k967v98ohhjngzu41lkgeptfqau7wtxvopw118nl5x7jn4irft5qyp6dotn2yuci5k7zbtike973lyfuwtqj11sjgppzhx6ipmoj2zavhe8ojjjl2ysjyb2hh0kxul1hqfltcuftpwaqjff9xa3sadjpqf9x2vd4nhacbvgpdthmbugll4fnvxd18hj5f6s70b6ingf1tzyousfaw0sd5lscb2rrs0yupbq8oaq3dgmlwf9ywiv8n1txcrt4pubmflw4obvgfp2h7ftygaf5xbkofd0e7ie21r31ma76yfbi7l1zqv5y25mkilf2fuzer1qvsno015bn1zshb74xs33ojplwztz0bdtw5jhgnm1o91aqngc6vvzgul8x510hdvcstixco9gxf5gs84qow7iqa3huchcbilogww5pgi4y3w1dm65218dnc47qtacbrljlyogd8h2zw336l08nmw0uulroaflv7hxx92nsg32r49i2njs64ze4tnq7gl2egdeh61j0go1u9lnxpj7bezwz3pyq6pw7t12v6kkgq2builkmjumczydw3tabdn7iwo8ppljv8dnem62m9tgfa32c1n0anldqaujqlphge8ck53f3ksg5vem1w1oh5i4c8tc0kiatfqe8cactj3p0uiu584htlmu43qhu458b8hid52mdolzh99joatm0y1ale6pz0s4qe62bs6jr9jh7jt9gmgakrkltred6a7p4dkgby5pawon0jkvcc4tbni4f6cnf66et63rzndx9okydhlu2vrcfn1bpy7avsyuxu15yajuto70dyuf6f5p4m4jotcd7l1q6ef6kkj39h0lxbzprvw6sumrxutpwy82o0y0r8sj8hm70ouf3h1mcsgmiu4u1525rlwzgsgdw1jlfpf662od5h3fimxnazsc358ujglrtakbsetqpvfmi9yia7awcev9mdrfltlhyo64aic2qgxb7ohj4hu4tc54evk7c78p4rz3lv4v6005d2yv81fd59je0watjas9q5gwo1rz3laf3t2sg0upwyfxnxfpq4h7qelw4k864u0foxuqd8a0fatncimkdsg8zhcrw62di6tpal6akmf6zvcd9dm9wa6z077wf05lgmdbjqe1b006h4vybi025e1zxq3nye32ium8ltx6lviheozhmthtilutgdwhel0fvqzx89wd8x6u8eir9koeouvw6c3lbydpt7oq23vvk8z3wtielb4n25b41w1f0gqdxvon66ed293mmc20td6nkly5bt9i89dc2qphwr9f9voyyrgexuk1jhqp7n8ktkax9ss03a20wet3lx09x8z9eu0uu78llypcvnz23x2oalpgspqpmy4j3dd7yrwpfaa81h4dsy4tat451es6b7jb7ssvp7gnh7we1rfu2x6bix69olivjx7zfgi2ui4u9bekmoetq72ijjiz3l9ag34d9lefzwt9okkha025y4obb6tx5j4tqzfx36yu74h82mmfge632mkzmm7uskjf2k2tvpbuesbulpu7anjg47j209pqj1xrvauwq4l2d4orp5fxv06bfuerabe60gzzrny7xtmz7hrah55cgl2lr15xqagi4bsj92v3ce1h5wkqd45yp5snq5s6rj8cqr93akr4gtmmmj1i376adoc91bxe8ke8lokxmtch2kkiugg0tk1v8oki5w5ukaili3ytv029bnpc61iyrlnf8xjg36pmk0e3xtl6mbzgskutj6akcy1f0qaks72bwdb1dxuiqrmpgdkkjt4fus503behdxsz79pyyd7udwa80uw4twc0vlkwt9rzn9eb8a4902wvewp0cmbdxhg9rk9hjg82so6my8ixgnfldb1m55i8brqfhz7rodvshappxksxhpfvo3hjmmwrgzztdhh4kvsgr27cmzi4irv7yqa9x74sooisfe7op2t0gntg90jsj7yskxax6fj7hsk2zn8e0x23snocmdb3jipsrzunjprhk3gjxrydda830dkjery37juyei4nb1un7rczrulm8vp8d0qo9gezyf0tdbp6l3uibyh6zqgw54qaqirwt4qopdxv4u996xlf6xolk9xu770wyg0tf2cewzcv0m6rj3np235udxid1tietcd4i5lkiv369jvx32qcgc5n516jgse3t2uzzb3jku2gzhwpjcjvhxoe1od2fvsym4432ca40s7bnyn3djo6mu68zrzg2wmn9ir0m3q0wg5kpvl1rqzmuzeb6ud3d4j91pn3m2lumylqff5gt34q9jf1hp1fzz2aoj6t6crumv8njhrqv58h12cgygkkawe5c9jd1cq6nwb6btbs8ymwhws
oxk0qwvrwdg4kkf00gxvsr795dbacocux8zsaq4hncnqidloexmxdnp8ksug2zatgm79rvvg1k90taf9kgvr5hrb8dx8ui7zd767tp7yh1tdjc0k61uqbayihl5sx88644psckfh5djgxcj3om6een0jaw65bzuwf5u1pftrtsyjfnk02imtw95fqjjtde21gnpt1axh9ueejga17xxf5eta4or3dn5wxagt3myf11938emsh38bu1ixqihlakolk0g7xyzk1vfv35cip6q7vvtlaxbwgg7ylkiwb2irntupnydezidf7momww8e6e7ycl8l6j124ebjzqbpuy23nzyy3w95iviixo315drj88yb65x77x8v0stofvmabq0p6g8od3b0puwrmhi3fter7squhjtavg83g1ahh1pnxqv4zn00e38ubapkh0n30g38q61yv4aj2gv00sp1i1oem4jzeg4cjfccooyte40fzk06v8x0s73ga58qyqzmjshr38nruy8ezrtn28y95tpxmfw8586tuslfel 01:00:48.151 05:59:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 01:00:48.151 05:59:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 01:00:48.151 05:59:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 01:00:48.151 05:59:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 01:00:48.151 [2024-12-09 05:59:42.711749] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:00:48.151 [2024-12-09 05:59:42.711840] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59886 ] 01:00:48.151 { 01:00:48.151 "subsystems": [ 01:00:48.151 { 01:00:48.151 "subsystem": "bdev", 01:00:48.151 "config": [ 01:00:48.151 { 01:00:48.151 "params": { 01:00:48.151 "trtype": "pcie", 01:00:48.151 "traddr": "0000:00:10.0", 01:00:48.151 "name": "Nvme0" 01:00:48.151 }, 01:00:48.151 "method": "bdev_nvme_attach_controller" 01:00:48.151 }, 01:00:48.151 { 01:00:48.151 "method": "bdev_wait_for_examine" 01:00:48.151 } 01:00:48.151 ] 01:00:48.151 } 01:00:48.151 ] 01:00:48.151 } 01:00:48.410 [2024-12-09 05:59:42.861296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:48.410 [2024-12-09 05:59:42.909865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:48.410 [2024-12-09 05:59:42.957165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:48.668  [2024-12-09T05:59:43.255Z] Copying: 4096/4096 [B] (average 4000 kBps) 01:00:48.668 01:00:48.668 05:59:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 01:00:48.668 05:59:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 01:00:48.668 05:59:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 01:00:48.668 05:59:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 01:00:48.926 { 01:00:48.926 "subsystems": [ 01:00:48.926 { 01:00:48.926 "subsystem": "bdev", 01:00:48.926 "config": [ 01:00:48.926 { 01:00:48.926 "params": { 01:00:48.926 "trtype": "pcie", 01:00:48.926 "traddr": "0000:00:10.0", 01:00:48.926 "name": "Nvme0" 01:00:48.926 }, 01:00:48.926 "method": "bdev_nvme_attach_controller" 01:00:48.926 }, 01:00:48.926 { 01:00:48.926 "method": "bdev_wait_for_examine" 01:00:48.926 } 01:00:48.926 ] 01:00:48.926 } 01:00:48.926 ] 01:00:48.926 } 01:00:48.926 [2024-12-09 05:59:43.270942] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
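dd_rw_offset then checks offset handling: the 4096-byte pattern generated above is written one block past the start of the namespace with --seek=1 and must read back identically with --skip=1 --count=1; the long [[ ... == \3\q\t\b... ]] comparison at the end of this section is that check with the pattern glob-escaped by xtrace. A rough sketch of the flow, with the same illustrative helpers as above and dd-style block-offset semantics assumed for --seek/--skip:

    # Write the 4 KiB pattern one block into the bdev, read it back from the
    # same offset, and compare it with what was written.
    "$spdk_dd" --if="$dump0" --ob=Nvme0n1 --seek=1 --json <(nvme_conf)
    "$spdk_dd" --ib=Nvme0n1 --of="$dump1" --skip=1 --count=1 --json <(nvme_conf)
    read -rn4096 data_check < "$dump1"
    [[ "$data_check" == "$(head -c 4096 "$dump0")" ]] && echo "offset round-trip OK"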
01:00:48.926 [2024-12-09 05:59:43.271008] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59899 ] 01:00:48.926 [2024-12-09 05:59:43.420311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:48.926 [2024-12-09 05:59:43.459940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:48.926 [2024-12-09 05:59:43.503386] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:49.185  [2024-12-09T05:59:43.772Z] Copying: 4096/4096 [B] (average 4000 kBps) 01:00:49.185 01:00:49.185 05:59:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 01:00:49.185 ************************************ 01:00:49.185 END TEST dd_rw_offset 01:00:49.186 05:59:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 3qtbfwjwssv7orplpvtpwtdgkmw6r6t78ptm18g7mfsml749iyvw6mf8wngl1r64mqw21zsa08tc380b5zyvlj0ubs6r9ju7dutpb5gg50p7c4sjkvy5rntg8gfafh551m7op9glx1ml3i699dy0eciii2dz0ye8vt024huzs6n8vb3zc5o8vm193mmqudhr2ltiobxxxcfb6zg2iic44ohk4rynvccjh1ooljxu66j6snd3033ufx5665d27lv438ertgbd2gs667o300p9qq5kxj0wkdtbtx8wguw5jws9ppbydslltmnmvun1wpuytc44up8b8vne2gy6ef8b0oyzxw9uc9h332in30wfynzetidw4ppbcay1wfegp0a7tsiq8zw8pczxt2lemqvbwkfomckwsok0f3pgmuqt2m11rspwhzk64fi24wgdcfi0o109858hxsbtcfxzbkr2f9wo7x1byfvcuj8x4ukdc7d2klihhmui43assm7vfihuwriqdww7gn6df3cg7yu94h4z1bk0oa30amr3749ue98jz1hfh1m877xjseqex34zsot7fwxbho6db0busxyn1irzj9okmjee41zilqq8s0fopoj0my2e5d801wfiortbe89fm21y7p385adicmte3ohgtvc321ludzna4lkw6n4tko4vqt2jp8667kkx4anfjuunqlbuzbms4l9huegarc2u1e84a1f3hqtr268gnxbrb4w61eri2bmyood5wey3p4u1oer08sod8nh1is8ejznuotfcg82lprfc1f7uq0dmqko6nbkx7d6ntr6gipk7yb8olx56vq131hgat5ay9q2u2sjhtyluiz94k2pjwp2jf88ft5frtmng9871jlqcbcgml0x7o5xkqr1sifyc3l0gmoxwpm1kt4d3zqiq2ndy2xifnp055hlvopwe7il631xm67plevotwz9hcuuwae4w8jk4zb5pbtufs6jcavzlikl0n9diuvezqhxtbu7s1hjqatutj0519pb6w8yr3x8o24x8ttp2ihbjqr96xs6wlscs52io21q7dgj70iqpumam0fjz1w1tm0lnesplz2ycasp1krpz6j3jqugpqlyy9u2jryiedc6kbpaec4elwjj1vfri1wr8k967v98ohhjngzu41lkgeptfqau7wtxvopw118nl5x7jn4irft5qyp6dotn2yuci5k7zbtike973lyfuwtqj11sjgppzhx6ipmoj2zavhe8ojjjl2ysjyb2hh0kxul1hqfltcuftpwaqjff9xa3sadjpqf9x2vd4nhacbvgpdthmbugll4fnvxd18hj5f6s70b6ingf1tzyousfaw0sd5lscb2rrs0yupbq8oaq3dgmlwf9ywiv8n1txcrt4pubmflw4obvgfp2h7ftygaf5xbkofd0e7ie21r31ma76yfbi7l1zqv5y25mkilf2fuzer1qvsno015bn1zshb74xs33ojplwztz0bdtw5jhgnm1o91aqngc6vvzgul8x510hdvcstixco9gxf5gs84qow7iqa3huchcbilogww5pgi4y3w1dm65218dnc47qtacbrljlyogd8h2zw336l08nmw0uulroaflv7hxx92nsg32r49i2njs64ze4tnq7gl2egdeh61j0go1u9lnxpj7bezwz3pyq6pw7t12v6kkgq2builkmjumczydw3tabdn7iwo8ppljv8dnem62m9tgfa32c1n0anldqaujqlphge8ck53f3ksg5vem1w1oh5i4c8tc0kiatfqe8cactj3p0uiu584htlmu43qhu458b8hid52mdolzh99joatm0y1ale6pz0s4qe62bs6jr9jh7jt9gmgakrkltred6a7p4dkgby5pawon0jkvcc4tbni4f6cnf66et63rzndx9okydhlu2vrcfn1bpy7avsyuxu15yajuto70dyuf6f5p4m4jotcd7l1q6ef6kkj39h0lxbzprvw6sumrxutpwy82o0y0r8sj8hm70ouf3h1mcsgmiu4u1525rlwzgsgdw1jlfpf662od5h3fimxnazsc358ujglrtakbsetqpvfmi9yia7awcev9mdrfltlhyo64aic2qgxb7ohj4hu4tc54evk7c78p4rz3lv4v6005d2yv81fd59je0watjas9q5gwo1rz3laf3t2sg0upwyfxnxfpq4h7qelw4k864u0foxuqd8a0fatncimkdsg8zhcrw62di6tpal6akmf6zvcd9dm9wa6z077wf05lgmdbjqe1b006h4vybi025e1zxq3nye32ium8ltx6lviheozhmthtilutgdwhel0fvqzx89wd8x6u8eir9koeouvw6c3lbydpt7oq23vvk8z3wtielb4n25b41w1f0gqdxvon66ed293mmc20td6nkly5bt9i89dc2qphwr9f9voyyrgexuk1jhqp7n8ktkax9ss03a20wet3lx09x8z9eu0u
u78llypcvnz23x2oalpgspqpmy4j3dd7yrwpfaa81h4dsy4tat451es6b7jb7ssvp7gnh7we1rfu2x6bix69olivjx7zfgi2ui4u9bekmoetq72ijjiz3l9ag34d9lefzwt9okkha025y4obb6tx5j4tqzfx36yu74h82mmfge632mkzmm7uskjf2k2tvpbuesbulpu7anjg47j209pqj1xrvauwq4l2d4orp5fxv06bfuerabe60gzzrny7xtmz7hrah55cgl2lr15xqagi4bsj92v3ce1h5wkqd45yp5snq5s6rj8cqr93akr4gtmmmj1i376adoc91bxe8ke8lokxmtch2kkiugg0tk1v8oki5w5ukaili3ytv029bnpc61iyrlnf8xjg36pmk0e3xtl6mbzgskutj6akcy1f0qaks72bwdb1dxuiqrmpgdkkjt4fus503behdxsz79pyyd7udwa80uw4twc0vlkwt9rzn9eb8a4902wvewp0cmbdxhg9rk9hjg82so6my8ixgnfldb1m55i8brqfhz7rodvshappxksxhpfvo3hjmmwrgzztdhh4kvsgr27cmzi4irv7yqa9x74sooisfe7op2t0gntg90jsj7yskxax6fj7hsk2zn8e0x23snocmdb3jipsrzunjprhk3gjxrydda830dkjery37juyei4nb1un7rczrulm8vp8d0qo9gezyf0tdbp6l3uibyh6zqgw54qaqirwt4qopdxv4u996xlf6xolk9xu770wyg0tf2cewzcv0m6rj3np235udxid1tietcd4i5lkiv369jvx32qcgc5n516jgse3t2uzzb3jku2gzhwpjcjvhxoe1od2fvsym4432ca40s7bnyn3djo6mu68zrzg2wmn9ir0m3q0wg5kpvl1rqzmuzeb6ud3d4j91pn3m2lumylqff5gt34q9jf1hp1fzz2aoj6t6crumv8njhrqv58h12cgygkkawe5c9jd1cq6nwb6btbs8ymwhwsoxk0qwvrwdg4kkf00gxvsr795dbacocux8zsaq4hncnqidloexmxdnp8ksug2zatgm79rvvg1k90taf9kgvr5hrb8dx8ui7zd767tp7yh1tdjc0k61uqbayihl5sx88644psckfh5djgxcj3om6een0jaw65bzuwf5u1pftrtsyjfnk02imtw95fqjjtde21gnpt1axh9ueejga17xxf5eta4or3dn5wxagt3myf11938emsh38bu1ixqihlakolk0g7xyzk1vfv35cip6q7vvtlaxbwgg7ylkiwb2irntupnydezidf7momww8e6e7ycl8l6j124ebjzqbpuy23nzyy3w95iviixo315drj88yb65x77x8v0stofvmabq0p6g8od3b0puwrmhi3fter7squhjtavg83g1ahh1pnxqv4zn00e38ubapkh0n30g38q61yv4aj2gv00sp1i1oem4jzeg4cjfccooyte40fzk06v8x0s73ga58qyqzmjshr38nruy8ezrtn28y95tpxmfw8586tuslfel == \3\q\t\b\f\w\j\w\s\s\v\7\o\r\p\l\p\v\t\p\w\t\d\g\k\m\w\6\r\6\t\7\8\p\t\m\1\8\g\7\m\f\s\m\l\7\4\9\i\y\v\w\6\m\f\8\w\n\g\l\1\r\6\4\m\q\w\2\1\z\s\a\0\8\t\c\3\8\0\b\5\z\y\v\l\j\0\u\b\s\6\r\9\j\u\7\d\u\t\p\b\5\g\g\5\0\p\7\c\4\s\j\k\v\y\5\r\n\t\g\8\g\f\a\f\h\5\5\1\m\7\o\p\9\g\l\x\1\m\l\3\i\6\9\9\d\y\0\e\c\i\i\i\2\d\z\0\y\e\8\v\t\0\2\4\h\u\z\s\6\n\8\v\b\3\z\c\5\o\8\v\m\1\9\3\m\m\q\u\d\h\r\2\l\t\i\o\b\x\x\x\c\f\b\6\z\g\2\i\i\c\4\4\o\h\k\4\r\y\n\v\c\c\j\h\1\o\o\l\j\x\u\6\6\j\6\s\n\d\3\0\3\3\u\f\x\5\6\6\5\d\2\7\l\v\4\3\8\e\r\t\g\b\d\2\g\s\6\6\7\o\3\0\0\p\9\q\q\5\k\x\j\0\w\k\d\t\b\t\x\8\w\g\u\w\5\j\w\s\9\p\p\b\y\d\s\l\l\t\m\n\m\v\u\n\1\w\p\u\y\t\c\4\4\u\p\8\b\8\v\n\e\2\g\y\6\e\f\8\b\0\o\y\z\x\w\9\u\c\9\h\3\3\2\i\n\3\0\w\f\y\n\z\e\t\i\d\w\4\p\p\b\c\a\y\1\w\f\e\g\p\0\a\7\t\s\i\q\8\z\w\8\p\c\z\x\t\2\l\e\m\q\v\b\w\k\f\o\m\c\k\w\s\o\k\0\f\3\p\g\m\u\q\t\2\m\1\1\r\s\p\w\h\z\k\6\4\f\i\2\4\w\g\d\c\f\i\0\o\1\0\9\8\5\8\h\x\s\b\t\c\f\x\z\b\k\r\2\f\9\w\o\7\x\1\b\y\f\v\c\u\j\8\x\4\u\k\d\c\7\d\2\k\l\i\h\h\m\u\i\4\3\a\s\s\m\7\v\f\i\h\u\w\r\i\q\d\w\w\7\g\n\6\d\f\3\c\g\7\y\u\9\4\h\4\z\1\b\k\0\o\a\3\0\a\m\r\3\7\4\9\u\e\9\8\j\z\1\h\f\h\1\m\8\7\7\x\j\s\e\q\e\x\3\4\z\s\o\t\7\f\w\x\b\h\o\6\d\b\0\b\u\s\x\y\n\1\i\r\z\j\9\o\k\m\j\e\e\4\1\z\i\l\q\q\8\s\0\f\o\p\o\j\0\m\y\2\e\5\d\8\0\1\w\f\i\o\r\t\b\e\8\9\f\m\2\1\y\7\p\3\8\5\a\d\i\c\m\t\e\3\o\h\g\t\v\c\3\2\1\l\u\d\z\n\a\4\l\k\w\6\n\4\t\k\o\4\v\q\t\2\j\p\8\6\6\7\k\k\x\4\a\n\f\j\u\u\n\q\l\b\u\z\b\m\s\4\l\9\h\u\e\g\a\r\c\2\u\1\e\8\4\a\1\f\3\h\q\t\r\2\6\8\g\n\x\b\r\b\4\w\6\1\e\r\i\2\b\m\y\o\o\d\5\w\e\y\3\p\4\u\1\o\e\r\0\8\s\o\d\8\n\h\1\i\s\8\e\j\z\n\u\o\t\f\c\g\8\2\l\p\r\f\c\1\f\7\u\q\0\d\m\q\k\o\6\n\b\k\x\7\d\6\n\t\r\6\g\i\p\k\7\y\b\8\o\l\x\5\6\v\q\1\3\1\h\g\a\t\5\a\y\9\q\2\u\2\s\j\h\t\y\l\u\i\z\9\4\k\2\p\j\w\p\2\j\f\8\8\f\t\5\f\r\t\m\n\g\9\8\7\1\j\l\q\c\b\c\g\m\l\0\x\7\o\5\x\k\q\r\1\s\i\f\y\c\3\l\0\g\m\o\x\w\p\m\1\k\t\4\d\3\z\q\i\q\2\n\d\y\2\x\i\f\n\p\0\5\5\h\l\v\o\p\w\e\7\i\l\6\3\1\x\m\6\7\p\l\e\v\o\t\w\z\9\h\c\u\u\w\a\e\4\w\8\j\k\4\z
\b\5\p\b\t\u\f\s\6\j\c\a\v\z\l\i\k\l\0\n\9\d\i\u\v\e\z\q\h\x\t\b\u\7\s\1\h\j\q\a\t\u\t\j\0\5\1\9\p\b\6\w\8\y\r\3\x\8\o\2\4\x\8\t\t\p\2\i\h\b\j\q\r\9\6\x\s\6\w\l\s\c\s\5\2\i\o\2\1\q\7\d\g\j\7\0\i\q\p\u\m\a\m\0\f\j\z\1\w\1\t\m\0\l\n\e\s\p\l\z\2\y\c\a\s\p\1\k\r\p\z\6\j\3\j\q\u\g\p\q\l\y\y\9\u\2\j\r\y\i\e\d\c\6\k\b\p\a\e\c\4\e\l\w\j\j\1\v\f\r\i\1\w\r\8\k\9\6\7\v\9\8\o\h\h\j\n\g\z\u\4\1\l\k\g\e\p\t\f\q\a\u\7\w\t\x\v\o\p\w\1\1\8\n\l\5\x\7\j\n\4\i\r\f\t\5\q\y\p\6\d\o\t\n\2\y\u\c\i\5\k\7\z\b\t\i\k\e\9\7\3\l\y\f\u\w\t\q\j\1\1\s\j\g\p\p\z\h\x\6\i\p\m\o\j\2\z\a\v\h\e\8\o\j\j\j\l\2\y\s\j\y\b\2\h\h\0\k\x\u\l\1\h\q\f\l\t\c\u\f\t\p\w\a\q\j\f\f\9\x\a\3\s\a\d\j\p\q\f\9\x\2\v\d\4\n\h\a\c\b\v\g\p\d\t\h\m\b\u\g\l\l\4\f\n\v\x\d\1\8\h\j\5\f\6\s\7\0\b\6\i\n\g\f\1\t\z\y\o\u\s\f\a\w\0\s\d\5\l\s\c\b\2\r\r\s\0\y\u\p\b\q\8\o\a\q\3\d\g\m\l\w\f\9\y\w\i\v\8\n\1\t\x\c\r\t\4\p\u\b\m\f\l\w\4\o\b\v\g\f\p\2\h\7\f\t\y\g\a\f\5\x\b\k\o\f\d\0\e\7\i\e\2\1\r\3\1\m\a\7\6\y\f\b\i\7\l\1\z\q\v\5\y\2\5\m\k\i\l\f\2\f\u\z\e\r\1\q\v\s\n\o\0\1\5\b\n\1\z\s\h\b\7\4\x\s\3\3\o\j\p\l\w\z\t\z\0\b\d\t\w\5\j\h\g\n\m\1\o\9\1\a\q\n\g\c\6\v\v\z\g\u\l\8\x\5\1\0\h\d\v\c\s\t\i\x\c\o\9\g\x\f\5\g\s\8\4\q\o\w\7\i\q\a\3\h\u\c\h\c\b\i\l\o\g\w\w\5\p\g\i\4\y\3\w\1\d\m\6\5\2\1\8\d\n\c\4\7\q\t\a\c\b\r\l\j\l\y\o\g\d\8\h\2\z\w\3\3\6\l\0\8\n\m\w\0\u\u\l\r\o\a\f\l\v\7\h\x\x\9\2\n\s\g\3\2\r\4\9\i\2\n\j\s\6\4\z\e\4\t\n\q\7\g\l\2\e\g\d\e\h\6\1\j\0\g\o\1\u\9\l\n\x\p\j\7\b\e\z\w\z\3\p\y\q\6\p\w\7\t\1\2\v\6\k\k\g\q\2\b\u\i\l\k\m\j\u\m\c\z\y\d\w\3\t\a\b\d\n\7\i\w\o\8\p\p\l\j\v\8\d\n\e\m\6\2\m\9\t\g\f\a\3\2\c\1\n\0\a\n\l\d\q\a\u\j\q\l\p\h\g\e\8\c\k\5\3\f\3\k\s\g\5\v\e\m\1\w\1\o\h\5\i\4\c\8\t\c\0\k\i\a\t\f\q\e\8\c\a\c\t\j\3\p\0\u\i\u\5\8\4\h\t\l\m\u\4\3\q\h\u\4\5\8\b\8\h\i\d\5\2\m\d\o\l\z\h\9\9\j\o\a\t\m\0\y\1\a\l\e\6\p\z\0\s\4\q\e\6\2\b\s\6\j\r\9\j\h\7\j\t\9\g\m\g\a\k\r\k\l\t\r\e\d\6\a\7\p\4\d\k\g\b\y\5\p\a\w\o\n\0\j\k\v\c\c\4\t\b\n\i\4\f\6\c\n\f\6\6\e\t\6\3\r\z\n\d\x\9\o\k\y\d\h\l\u\2\v\r\c\f\n\1\b\p\y\7\a\v\s\y\u\x\u\1\5\y\a\j\u\t\o\7\0\d\y\u\f\6\f\5\p\4\m\4\j\o\t\c\d\7\l\1\q\6\e\f\6\k\k\j\3\9\h\0\l\x\b\z\p\r\v\w\6\s\u\m\r\x\u\t\p\w\y\8\2\o\0\y\0\r\8\s\j\8\h\m\7\0\o\u\f\3\h\1\m\c\s\g\m\i\u\4\u\1\5\2\5\r\l\w\z\g\s\g\d\w\1\j\l\f\p\f\6\6\2\o\d\5\h\3\f\i\m\x\n\a\z\s\c\3\5\8\u\j\g\l\r\t\a\k\b\s\e\t\q\p\v\f\m\i\9\y\i\a\7\a\w\c\e\v\9\m\d\r\f\l\t\l\h\y\o\6\4\a\i\c\2\q\g\x\b\7\o\h\j\4\h\u\4\t\c\5\4\e\v\k\7\c\7\8\p\4\r\z\3\l\v\4\v\6\0\0\5\d\2\y\v\8\1\f\d\5\9\j\e\0\w\a\t\j\a\s\9\q\5\g\w\o\1\r\z\3\l\a\f\3\t\2\s\g\0\u\p\w\y\f\x\n\x\f\p\q\4\h\7\q\e\l\w\4\k\8\6\4\u\0\f\o\x\u\q\d\8\a\0\f\a\t\n\c\i\m\k\d\s\g\8\z\h\c\r\w\6\2\d\i\6\t\p\a\l\6\a\k\m\f\6\z\v\c\d\9\d\m\9\w\a\6\z\0\7\7\w\f\0\5\l\g\m\d\b\j\q\e\1\b\0\0\6\h\4\v\y\b\i\0\2\5\e\1\z\x\q\3\n\y\e\3\2\i\u\m\8\l\t\x\6\l\v\i\h\e\o\z\h\m\t\h\t\i\l\u\t\g\d\w\h\e\l\0\f\v\q\z\x\8\9\w\d\8\x\6\u\8\e\i\r\9\k\o\e\o\u\v\w\6\c\3\l\b\y\d\p\t\7\o\q\2\3\v\v\k\8\z\3\w\t\i\e\l\b\4\n\2\5\b\4\1\w\1\f\0\g\q\d\x\v\o\n\6\6\e\d\2\9\3\m\m\c\2\0\t\d\6\n\k\l\y\5\b\t\9\i\8\9\d\c\2\q\p\h\w\r\9\f\9\v\o\y\y\r\g\e\x\u\k\1\j\h\q\p\7\n\8\k\t\k\a\x\9\s\s\0\3\a\2\0\w\e\t\3\l\x\0\9\x\8\z\9\e\u\0\u\u\7\8\l\l\y\p\c\v\n\z\2\3\x\2\o\a\l\p\g\s\p\q\p\m\y\4\j\3\d\d\7\y\r\w\p\f\a\a\8\1\h\4\d\s\y\4\t\a\t\4\5\1\e\s\6\b\7\j\b\7\s\s\v\p\7\g\n\h\7\w\e\1\r\f\u\2\x\6\b\i\x\6\9\o\l\i\v\j\x\7\z\f\g\i\2\u\i\4\u\9\b\e\k\m\o\e\t\q\7\2\i\j\j\i\z\3\l\9\a\g\3\4\d\9\l\e\f\z\w\t\9\o\k\k\h\a\0\2\5\y\4\o\b\b\6\t\x\5\j\4\t\q\z\f\x\3\6\y\u\7\4\h\8\2\m\m\f\g\e\6\3\2\m\k\z\m\m\7\u\s\k\j\f\2\k\2\t\v\p\b\u\e\s\b\u\l\p\u\7\a\n\j\g\4\7\j\2\0\9\p\q\j\1\x\r\v\a\u\w\q\4\l\2\d\4\o\r\p\5\f\x\v\0\6\b\f\u\e\r\a\b\e\
6\0\g\z\z\r\n\y\7\x\t\m\z\7\h\r\a\h\5\5\c\g\l\2\l\r\1\5\x\q\a\g\i\4\b\s\j\9\2\v\3\c\e\1\h\5\w\k\q\d\4\5\y\p\5\s\n\q\5\s\6\r\j\8\c\q\r\9\3\a\k\r\4\g\t\m\m\m\j\1\i\3\7\6\a\d\o\c\9\1\b\x\e\8\k\e\8\l\o\k\x\m\t\c\h\2\k\k\i\u\g\g\0\t\k\1\v\8\o\k\i\5\w\5\u\k\a\i\l\i\3\y\t\v\0\2\9\b\n\p\c\6\1\i\y\r\l\n\f\8\x\j\g\3\6\p\m\k\0\e\3\x\t\l\6\m\b\z\g\s\k\u\t\j\6\a\k\c\y\1\f\0\q\a\k\s\7\2\b\w\d\b\1\d\x\u\i\q\r\m\p\g\d\k\k\j\t\4\f\u\s\5\0\3\b\e\h\d\x\s\z\7\9\p\y\y\d\7\u\d\w\a\8\0\u\w\4\t\w\c\0\v\l\k\w\t\9\r\z\n\9\e\b\8\a\4\9\0\2\w\v\e\w\p\0\c\m\b\d\x\h\g\9\r\k\9\h\j\g\8\2\s\o\6\m\y\8\i\x\g\n\f\l\d\b\1\m\5\5\i\8\b\r\q\f\h\z\7\r\o\d\v\s\h\a\p\p\x\k\s\x\h\p\f\v\o\3\h\j\m\m\w\r\g\z\z\t\d\h\h\4\k\v\s\g\r\2\7\c\m\z\i\4\i\r\v\7\y\q\a\9\x\7\4\s\o\o\i\s\f\e\7\o\p\2\t\0\g\n\t\g\9\0\j\s\j\7\y\s\k\x\a\x\6\f\j\7\h\s\k\2\z\n\8\e\0\x\2\3\s\n\o\c\m\d\b\3\j\i\p\s\r\z\u\n\j\p\r\h\k\3\g\j\x\r\y\d\d\a\8\3\0\d\k\j\e\r\y\3\7\j\u\y\e\i\4\n\b\1\u\n\7\r\c\z\r\u\l\m\8\v\p\8\d\0\q\o\9\g\e\z\y\f\0\t\d\b\p\6\l\3\u\i\b\y\h\6\z\q\g\w\5\4\q\a\q\i\r\w\t\4\q\o\p\d\x\v\4\u\9\9\6\x\l\f\6\x\o\l\k\9\x\u\7\7\0\w\y\g\0\t\f\2\c\e\w\z\c\v\0\m\6\r\j\3\n\p\2\3\5\u\d\x\i\d\1\t\i\e\t\c\d\4\i\5\l\k\i\v\3\6\9\j\v\x\3\2\q\c\g\c\5\n\5\1\6\j\g\s\e\3\t\2\u\z\z\b\3\j\k\u\2\g\z\h\w\p\j\c\j\v\h\x\o\e\1\o\d\2\f\v\s\y\m\4\4\3\2\c\a\4\0\s\7\b\n\y\n\3\d\j\o\6\m\u\6\8\z\r\z\g\2\w\m\n\9\i\r\0\m\3\q\0\w\g\5\k\p\v\l\1\r\q\z\m\u\z\e\b\6\u\d\3\d\4\j\9\1\p\n\3\m\2\l\u\m\y\l\q\f\f\5\g\t\3\4\q\9\j\f\1\h\p\1\f\z\z\2\a\o\j\6\t\6\c\r\u\m\v\8\n\j\h\r\q\v\5\8\h\1\2\c\g\y\g\k\k\a\w\e\5\c\9\j\d\1\c\q\6\n\w\b\6\b\t\b\s\8\y\m\w\h\w\s\o\x\k\0\q\w\v\r\w\d\g\4\k\k\f\0\0\g\x\v\s\r\7\9\5\d\b\a\c\o\c\u\x\8\z\s\a\q\4\h\n\c\n\q\i\d\l\o\e\x\m\x\d\n\p\8\k\s\u\g\2\z\a\t\g\m\7\9\r\v\v\g\1\k\9\0\t\a\f\9\k\g\v\r\5\h\r\b\8\d\x\8\u\i\7\z\d\7\6\7\t\p\7\y\h\1\t\d\j\c\0\k\6\1\u\q\b\a\y\i\h\l\5\s\x\8\8\6\4\4\p\s\c\k\f\h\5\d\j\g\x\c\j\3\o\m\6\e\e\n\0\j\a\w\6\5\b\z\u\w\f\5\u\1\p\f\t\r\t\s\y\j\f\n\k\0\2\i\m\t\w\9\5\f\q\j\j\t\d\e\2\1\g\n\p\t\1\a\x\h\9\u\e\e\j\g\a\1\7\x\x\f\5\e\t\a\4\o\r\3\d\n\5\w\x\a\g\t\3\m\y\f\1\1\9\3\8\e\m\s\h\3\8\b\u\1\i\x\q\i\h\l\a\k\o\l\k\0\g\7\x\y\z\k\1\v\f\v\3\5\c\i\p\6\q\7\v\v\t\l\a\x\b\w\g\g\7\y\l\k\i\w\b\2\i\r\n\t\u\p\n\y\d\e\z\i\d\f\7\m\o\m\w\w\8\e\6\e\7\y\c\l\8\l\6\j\1\2\4\e\b\j\z\q\b\p\u\y\2\3\n\z\y\y\3\w\9\5\i\v\i\i\x\o\3\1\5\d\r\j\8\8\y\b\6\5\x\7\7\x\8\v\0\s\t\o\f\v\m\a\b\q\0\p\6\g\8\o\d\3\b\0\p\u\w\r\m\h\i\3\f\t\e\r\7\s\q\u\h\j\t\a\v\g\8\3\g\1\a\h\h\1\p\n\x\q\v\4\z\n\0\0\e\3\8\u\b\a\p\k\h\0\n\3\0\g\3\8\q\6\1\y\v\4\a\j\2\g\v\0\0\s\p\1\i\1\o\e\m\4\j\z\e\g\4\c\j\f\c\c\o\o\y\t\e\4\0\f\z\k\0\6\v\8\x\0\s\7\3\g\a\5\8\q\y\q\z\m\j\s\h\r\3\8\n\r\u\y\8\e\z\r\t\n\2\8\y\9\5\t\p\x\m\f\w\8\5\8\6\t\u\s\l\f\e\l ]] 01:00:49.186 01:00:49.186 real 0m1.152s 01:00:49.186 user 0m0.746s 01:00:49.186 sys 0m0.543s 01:00:49.186 05:59:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:49.186 05:59:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 01:00:49.186 ************************************ 01:00:49.444 05:59:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 01:00:49.444 05:59:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 01:00:49.444 05:59:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 01:00:49.444 05:59:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 01:00:49.444 05:59:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 01:00:49.444 05:59:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
01:00:49.444 05:59:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 01:00:49.444 05:59:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 01:00:49.444 05:59:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 01:00:49.444 05:59:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 01:00:49.444 05:59:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 01:00:49.444 [2024-12-09 05:59:43.883634] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:00:49.444 [2024-12-09 05:59:43.883722] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59929 ] 01:00:49.444 { 01:00:49.444 "subsystems": [ 01:00:49.444 { 01:00:49.444 "subsystem": "bdev", 01:00:49.444 "config": [ 01:00:49.444 { 01:00:49.444 "params": { 01:00:49.444 "trtype": "pcie", 01:00:49.444 "traddr": "0000:00:10.0", 01:00:49.444 "name": "Nvme0" 01:00:49.444 }, 01:00:49.444 "method": "bdev_nvme_attach_controller" 01:00:49.444 }, 01:00:49.444 { 01:00:49.444 "method": "bdev_wait_for_examine" 01:00:49.444 } 01:00:49.444 ] 01:00:49.444 } 01:00:49.444 ] 01:00:49.444 } 01:00:49.702 [2024-12-09 05:59:44.035040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:49.702 [2024-12-09 05:59:44.087916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:49.702 [2024-12-09 05:59:44.134968] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:49.702  [2024-12-09T05:59:44.549Z] Copying: 1024/1024 [kB] (average 500 MBps) 01:00:49.962 01:00:49.962 05:59:44 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:00:49.962 01:00:49.962 real 0m15.547s 01:00:49.962 user 0m10.388s 01:00:49.962 sys 0m6.332s 01:00:49.962 05:59:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:49.962 05:59:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 01:00:49.962 ************************************ 01:00:49.962 END TEST spdk_dd_basic_rw 01:00:49.962 ************************************ 01:00:49.962 05:59:44 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 01:00:49.962 05:59:44 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:00:49.962 05:59:44 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:49.962 05:59:44 spdk_dd -- common/autotest_common.sh@10 -- # set +x 01:00:49.962 ************************************ 01:00:49.962 START TEST spdk_dd_posix 01:00:49.962 ************************************ 01:00:49.962 05:59:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 01:00:50.222 * Looking for test storage... 
01:00:50.222 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lcov --version 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:00:50.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:50.222 --rc genhtml_branch_coverage=1 01:00:50.222 --rc genhtml_function_coverage=1 01:00:50.222 --rc genhtml_legend=1 01:00:50.222 --rc geninfo_all_blocks=1 01:00:50.222 --rc geninfo_unexecuted_blocks=1 01:00:50.222 01:00:50.222 ' 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:00:50.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:50.222 --rc genhtml_branch_coverage=1 01:00:50.222 --rc genhtml_function_coverage=1 01:00:50.222 --rc genhtml_legend=1 01:00:50.222 --rc geninfo_all_blocks=1 01:00:50.222 --rc geninfo_unexecuted_blocks=1 01:00:50.222 01:00:50.222 ' 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:00:50.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:50.222 --rc genhtml_branch_coverage=1 01:00:50.222 --rc genhtml_function_coverage=1 01:00:50.222 --rc genhtml_legend=1 01:00:50.222 --rc geninfo_all_blocks=1 01:00:50.222 --rc geninfo_unexecuted_blocks=1 01:00:50.222 01:00:50.222 ' 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:00:50.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:50.222 --rc genhtml_branch_coverage=1 01:00:50.222 --rc genhtml_function_coverage=1 01:00:50.222 --rc genhtml_legend=1 01:00:50.222 --rc geninfo_all_blocks=1 01:00:50.222 --rc geninfo_unexecuted_blocks=1 01:00:50.222 01:00:50.222 ' 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:00:50.222 05:59:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 01:00:50.223 * First test run, liburing in use 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 01:00:50.223 ************************************ 01:00:50.223 START TEST dd_flag_append 01:00:50.223 ************************************ 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=pdvzegc14xggel49n3ppqytb1d7fw2lg 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=dlawg8myiaut1sxb3t8m97e8sgs0kdk8 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s pdvzegc14xggel49n3ppqytb1d7fw2lg 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s dlawg8myiaut1sxb3t8m97e8sgs0kdk8 01:00:50.223 05:59:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 01:00:50.223 [2024-12-09 05:59:44.797922] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:50.223 [2024-12-09 05:59:44.797989] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60001 ] 01:00:50.483 [2024-12-09 05:59:44.946192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:50.483 [2024-12-09 05:59:44.987352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:50.483 [2024-12-09 05:59:45.029139] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:50.483  [2024-12-09T05:59:45.334Z] Copying: 32/32 [B] (average 31 kBps) 01:00:50.747 01:00:50.747 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ dlawg8myiaut1sxb3t8m97e8sgs0kdk8pdvzegc14xggel49n3ppqytb1d7fw2lg == \d\l\a\w\g\8\m\y\i\a\u\t\1\s\x\b\3\t\8\m\9\7\e\8\s\g\s\0\k\d\k\8\p\d\v\z\e\g\c\1\4\x\g\g\e\l\4\9\n\3\p\p\q\y\t\b\1\d\7\f\w\2\l\g ]] 01:00:50.747 01:00:50.747 real 0m0.475s 01:00:50.747 user 0m0.249s 01:00:50.747 sys 0m0.232s 01:00:50.747 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:50.747 ************************************ 01:00:50.747 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 01:00:50.747 END TEST dd_flag_append 01:00:50.747 ************************************ 01:00:50.747 05:59:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 01:00:50.747 05:59:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:00:50.747 05:59:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:50.747 05:59:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 01:00:50.747 ************************************ 01:00:50.747 START TEST dd_flag_directory 01:00:50.747 ************************************ 01:00:50.747 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 01:00:50.747 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:00:50.747 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 01:00:50.747 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:00:50.747 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:00:50.747 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:50.747 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:00:50.747 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:50.747 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:00:50.747 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:50.747 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:00:50.747 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:00:50.747 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:00:51.009 [2024-12-09 05:59:45.347921] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:00:51.009 [2024-12-09 05:59:45.347986] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60029 ] 01:00:51.009 [2024-12-09 05:59:45.498180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:51.009 [2024-12-09 05:59:45.547664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:51.267 [2024-12-09 05:59:45.595865] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:51.267 [2024-12-09 05:59:45.626296] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 01:00:51.267 [2024-12-09 05:59:45.626340] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 01:00:51.267 [2024-12-09 05:59:45.626353] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:00:51.267 [2024-12-09 05:59:45.721863] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 01:00:51.267 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 01:00:51.267 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:00:51.267 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 01:00:51.267 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 01:00:51.267 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 01:00:51.267 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:00:51.267 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 01:00:51.267 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 01:00:51.267 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 01:00:51.267 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:00:51.267 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:51.267 05:59:45 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:00:51.267 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:51.267 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:00:51.267 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:51.267 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:00:51.267 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:00:51.267 05:59:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 01:00:51.267 [2024-12-09 05:59:45.833498] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:00:51.267 [2024-12-09 05:59:45.833566] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60039 ] 01:00:51.526 [2024-12-09 05:59:45.981712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:51.526 [2024-12-09 05:59:46.024179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:51.526 [2024-12-09 05:59:46.067161] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:51.526 [2024-12-09 05:59:46.096447] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 01:00:51.526 [2024-12-09 05:59:46.096486] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 01:00:51.526 [2024-12-09 05:59:46.096497] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:00:51.785 [2024-12-09 05:59:46.192639] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 01:00:51.785 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 01:00:51.785 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:00:51.785 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 01:00:51.785 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 01:00:51.785 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 01:00:51.785 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:00:51.785 01:00:51.785 real 0m0.960s 01:00:51.786 user 0m0.493s 01:00:51.786 sys 0m0.260s 01:00:51.786 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:51.786 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 01:00:51.786 ************************************ 01:00:51.786 END TEST dd_flag_directory 01:00:51.786 ************************************ 01:00:51.786 05:59:46 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 01:00:51.786 05:59:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:00:51.786 05:59:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:51.786 05:59:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 01:00:51.786 ************************************ 01:00:51.786 START TEST dd_flag_nofollow 01:00:51.786 ************************************ 01:00:51.786 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 01:00:51.786 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 01:00:51.786 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 01:00:51.786 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 01:00:51.786 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 01:00:51.786 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:00:51.786 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 01:00:51.786 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:00:51.786 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:00:51.786 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:51.786 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:00:51.786 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:51.786 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:00:51.786 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:51.786 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:00:51.786 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:00:51.786 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:00:52.044 [2024-12-09 05:59:46.397420] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:52.044 [2024-12-09 05:59:46.397506] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60067 ] 01:00:52.044 [2024-12-09 05:59:46.548979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:52.044 [2024-12-09 05:59:46.595737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:52.302 [2024-12-09 05:59:46.643965] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:52.302 [2024-12-09 05:59:46.674387] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 01:00:52.303 [2024-12-09 05:59:46.674620] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 01:00:52.303 [2024-12-09 05:59:46.674681] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:00:52.303 [2024-12-09 05:59:46.770293] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 01:00:52.303 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 01:00:52.303 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:00:52.303 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 01:00:52.303 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 01:00:52.303 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 01:00:52.303 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:00:52.303 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 01:00:52.303 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 01:00:52.303 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 01:00:52.303 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:00:52.303 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:52.303 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:00:52.303 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:52.303 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:00:52.303 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:52.303 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:00:52.303 05:59:46 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:00:52.303 05:59:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 01:00:52.303 [2024-12-09 05:59:46.886477] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:00:52.303 [2024-12-09 05:59:46.886562] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60077 ] 01:00:52.561 [2024-12-09 05:59:47.036419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:52.561 [2024-12-09 05:59:47.083124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:52.561 [2024-12-09 05:59:47.128709] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:52.819 [2024-12-09 05:59:47.158165] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 01:00:52.819 [2024-12-09 05:59:47.158205] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 01:00:52.820 [2024-12-09 05:59:47.158218] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:00:52.820 [2024-12-09 05:59:47.254461] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 01:00:52.820 05:59:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 01:00:52.820 05:59:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:00:52.820 05:59:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 01:00:52.820 05:59:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 01:00:52.820 05:59:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 01:00:52.820 05:59:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:00:52.820 05:59:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 01:00:52.820 05:59:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 01:00:52.820 05:59:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 01:00:52.820 05:59:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:00:52.820 [2024-12-09 05:59:47.364951] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:52.820 [2024-12-09 05:59:47.365014] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60084 ] 01:00:53.078 [2024-12-09 05:59:47.512830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:53.078 [2024-12-09 05:59:47.555800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:53.078 [2024-12-09 05:59:47.599141] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:53.078  [2024-12-09T05:59:47.924Z] Copying: 512/512 [B] (average 500 kBps) 01:00:53.337 01:00:53.337 05:59:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ zdv26sn9pe82obfr1rhs9syftq7ljemol2srb1j6unhi5wagdq5lw31ce9lf2jyv7so9owu7w57akqvimomtm2cuckepwy9y1hv7uwhm12zm6m5uekgjbh0nmw6r1hbu0332moyolrg64b9juo9t5lauadhvszacrni2yk84j8gi414lsyui27ljho31z3bm93etwbmgyd24h6tmsym02xtwb3c2burdmyzetf3popk0h9bic9gqcgh2gbnnagpxlawognooczs9wyxgv2jwdmp9g946pm4bc7118wyxcw7zmxoldohj539ts4yn5d0l2svlfea0ugpzjag0arps618la9ptmjzslyns10e6rw51ehtvp6o8vlewrqtxvu18ht3h9ruyj6fcnveq05gh93j64deqsmtor0wbs99jafk2edwpzu0tba1xrnni6mcf7u17e5xgncgoi6e6u0l06lwjura2xdp5ghpqg3p22mhb8t5t1snwcb3mvqtuiv2g == \z\d\v\2\6\s\n\9\p\e\8\2\o\b\f\r\1\r\h\s\9\s\y\f\t\q\7\l\j\e\m\o\l\2\s\r\b\1\j\6\u\n\h\i\5\w\a\g\d\q\5\l\w\3\1\c\e\9\l\f\2\j\y\v\7\s\o\9\o\w\u\7\w\5\7\a\k\q\v\i\m\o\m\t\m\2\c\u\c\k\e\p\w\y\9\y\1\h\v\7\u\w\h\m\1\2\z\m\6\m\5\u\e\k\g\j\b\h\0\n\m\w\6\r\1\h\b\u\0\3\3\2\m\o\y\o\l\r\g\6\4\b\9\j\u\o\9\t\5\l\a\u\a\d\h\v\s\z\a\c\r\n\i\2\y\k\8\4\j\8\g\i\4\1\4\l\s\y\u\i\2\7\l\j\h\o\3\1\z\3\b\m\9\3\e\t\w\b\m\g\y\d\2\4\h\6\t\m\s\y\m\0\2\x\t\w\b\3\c\2\b\u\r\d\m\y\z\e\t\f\3\p\o\p\k\0\h\9\b\i\c\9\g\q\c\g\h\2\g\b\n\n\a\g\p\x\l\a\w\o\g\n\o\o\c\z\s\9\w\y\x\g\v\2\j\w\d\m\p\9\g\9\4\6\p\m\4\b\c\7\1\1\8\w\y\x\c\w\7\z\m\x\o\l\d\o\h\j\5\3\9\t\s\4\y\n\5\d\0\l\2\s\v\l\f\e\a\0\u\g\p\z\j\a\g\0\a\r\p\s\6\1\8\l\a\9\p\t\m\j\z\s\l\y\n\s\1\0\e\6\r\w\5\1\e\h\t\v\p\6\o\8\v\l\e\w\r\q\t\x\v\u\1\8\h\t\3\h\9\r\u\y\j\6\f\c\n\v\e\q\0\5\g\h\9\3\j\6\4\d\e\q\s\m\t\o\r\0\w\b\s\9\9\j\a\f\k\2\e\d\w\p\z\u\0\t\b\a\1\x\r\n\n\i\6\m\c\f\7\u\1\7\e\5\x\g\n\c\g\o\i\6\e\6\u\0\l\0\6\l\w\j\u\r\a\2\x\d\p\5\g\h\p\q\g\3\p\2\2\m\h\b\8\t\5\t\1\s\n\w\c\b\3\m\v\q\t\u\i\v\2\g ]] 01:00:53.337 01:00:53.337 real 0m1.455s 01:00:53.337 user 0m0.753s 01:00:53.337 sys 0m0.498s 01:00:53.337 05:59:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:53.337 ************************************ 01:00:53.337 END TEST dd_flag_nofollow 01:00:53.337 ************************************ 01:00:53.337 05:59:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 01:00:53.337 05:59:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 01:00:53.337 05:59:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:00:53.337 05:59:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:53.337 05:59:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 01:00:53.337 ************************************ 01:00:53.337 START TEST dd_flag_noatime 01:00:53.337 ************************************ 01:00:53.337 05:59:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 01:00:53.337 05:59:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 01:00:53.337 05:59:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 01:00:53.337 05:59:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 01:00:53.337 05:59:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 01:00:53.337 05:59:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 01:00:53.337 05:59:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:00:53.337 05:59:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1733723987 01:00:53.337 05:59:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:00:53.337 05:59:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1733723987 01:00:53.337 05:59:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 01:00:54.720 05:59:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:00:54.720 [2024-12-09 05:59:48.944586] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:00:54.720 [2024-12-09 05:59:48.944659] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60127 ] 01:00:54.720 [2024-12-09 05:59:49.092799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:54.720 [2024-12-09 05:59:49.142037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:54.720 [2024-12-09 05:59:49.188721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:54.720  [2024-12-09T05:59:49.567Z] Copying: 512/512 [B] (average 500 kBps) 01:00:54.980 01:00:54.980 05:59:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:00:54.980 05:59:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1733723987 )) 01:00:54.980 05:59:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:00:54.980 05:59:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1733723987 )) 01:00:54.980 05:59:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:00:54.980 [2024-12-09 05:59:49.436302] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:54.980 [2024-12-09 05:59:49.436379] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60139 ] 01:00:55.240 [2024-12-09 05:59:49.587971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:55.240 [2024-12-09 05:59:49.631307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:55.240 [2024-12-09 05:59:49.674473] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:55.240  [2024-12-09T05:59:50.097Z] Copying: 512/512 [B] (average 500 kBps) 01:00:55.510 01:00:55.510 05:59:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:00:55.510 05:59:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1733723989 )) 01:00:55.510 01:00:55.510 real 0m2.006s 01:00:55.510 user 0m0.494s 01:00:55.510 sys 0m0.516s 01:00:55.510 05:59:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:55.510 05:59:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 01:00:55.510 ************************************ 01:00:55.510 END TEST dd_flag_noatime 01:00:55.510 ************************************ 01:00:55.510 05:59:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 01:00:55.511 05:59:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:00:55.511 05:59:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:55.511 05:59:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 01:00:55.511 ************************************ 01:00:55.511 START TEST dd_flags_misc 01:00:55.511 ************************************ 01:00:55.511 05:59:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 01:00:55.511 05:59:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 01:00:55.511 05:59:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 01:00:55.511 05:59:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 01:00:55.511 05:59:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 01:00:55.511 05:59:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 01:00:55.511 05:59:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 01:00:55.511 05:59:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 01:00:55.511 05:59:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:00:55.511 05:59:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 01:00:55.511 [2024-12-09 05:59:50.011837] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:55.511 [2024-12-09 05:59:50.011919] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60169 ] 01:00:55.786 [2024-12-09 05:59:50.164448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:55.786 [2024-12-09 05:59:50.207363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:55.786 [2024-12-09 05:59:50.249940] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:55.786  [2024-12-09T05:59:50.632Z] Copying: 512/512 [B] (average 500 kBps) 01:00:56.045 01:00:56.046 05:59:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 3uxgmipw0xtfsai7853lx5xvz131w5egyh77c11e3awz2sy5qdlmpj4fwq5d5mks4cjljiupj7nlgv9pauxe43om8go81dyovm3z0ctl4h6opk7cr3rfwok39df1flo9wi0i32zdiccilmjpcekhiul8jyqtnnc8gj4qw5pjshlsanm557nzabqgkwr2e124mfb1yg7p6eploa7xzih3qdqjk7w4cdca2g7er78dfn1j7ddavavtuy3o4j9x3kgp7x860d58ku78rfaq01ucdh1gyhnev120weqrgn4jixsuev0dkhue1u4wpdfa04341x6jgdhu5cio4ncxr8lxqh1f1bjlyq83b0myu3sb9f5xz2scwqa1m47elm39dqh5xxs6a6b9eh37cewsp9nnao2ok45c58sfrohigvfovjn2idmj6bty29ftp8auxfuo7pw2p0yiz8pc0p2pmnuxmvpq7sfqjsenn4of3uh1e0y4y0tt7owmf3rfqrrbnv99 == \3\u\x\g\m\i\p\w\0\x\t\f\s\a\i\7\8\5\3\l\x\5\x\v\z\1\3\1\w\5\e\g\y\h\7\7\c\1\1\e\3\a\w\z\2\s\y\5\q\d\l\m\p\j\4\f\w\q\5\d\5\m\k\s\4\c\j\l\j\i\u\p\j\7\n\l\g\v\9\p\a\u\x\e\4\3\o\m\8\g\o\8\1\d\y\o\v\m\3\z\0\c\t\l\4\h\6\o\p\k\7\c\r\3\r\f\w\o\k\3\9\d\f\1\f\l\o\9\w\i\0\i\3\2\z\d\i\c\c\i\l\m\j\p\c\e\k\h\i\u\l\8\j\y\q\t\n\n\c\8\g\j\4\q\w\5\p\j\s\h\l\s\a\n\m\5\5\7\n\z\a\b\q\g\k\w\r\2\e\1\2\4\m\f\b\1\y\g\7\p\6\e\p\l\o\a\7\x\z\i\h\3\q\d\q\j\k\7\w\4\c\d\c\a\2\g\7\e\r\7\8\d\f\n\1\j\7\d\d\a\v\a\v\t\u\y\3\o\4\j\9\x\3\k\g\p\7\x\8\6\0\d\5\8\k\u\7\8\r\f\a\q\0\1\u\c\d\h\1\g\y\h\n\e\v\1\2\0\w\e\q\r\g\n\4\j\i\x\s\u\e\v\0\d\k\h\u\e\1\u\4\w\p\d\f\a\0\4\3\4\1\x\6\j\g\d\h\u\5\c\i\o\4\n\c\x\r\8\l\x\q\h\1\f\1\b\j\l\y\q\8\3\b\0\m\y\u\3\s\b\9\f\5\x\z\2\s\c\w\q\a\1\m\4\7\e\l\m\3\9\d\q\h\5\x\x\s\6\a\6\b\9\e\h\3\7\c\e\w\s\p\9\n\n\a\o\2\o\k\4\5\c\5\8\s\f\r\o\h\i\g\v\f\o\v\j\n\2\i\d\m\j\6\b\t\y\2\9\f\t\p\8\a\u\x\f\u\o\7\p\w\2\p\0\y\i\z\8\p\c\0\p\2\p\m\n\u\x\m\v\p\q\7\s\f\q\j\s\e\n\n\4\o\f\3\u\h\1\e\0\y\4\y\0\t\t\7\o\w\m\f\3\r\f\q\r\r\b\n\v\9\9 ]] 01:00:56.046 05:59:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:00:56.046 05:59:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 01:00:56.046 [2024-12-09 05:59:50.466916] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:56.046 [2024-12-09 05:59:50.466997] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60177 ] 01:00:56.046 [2024-12-09 05:59:50.619370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:56.306 [2024-12-09 05:59:50.664445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:56.306 [2024-12-09 05:59:50.709034] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:56.306  [2024-12-09T05:59:50.893Z] Copying: 512/512 [B] (average 500 kBps) 01:00:56.306 01:00:56.566 05:59:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 3uxgmipw0xtfsai7853lx5xvz131w5egyh77c11e3awz2sy5qdlmpj4fwq5d5mks4cjljiupj7nlgv9pauxe43om8go81dyovm3z0ctl4h6opk7cr3rfwok39df1flo9wi0i32zdiccilmjpcekhiul8jyqtnnc8gj4qw5pjshlsanm557nzabqgkwr2e124mfb1yg7p6eploa7xzih3qdqjk7w4cdca2g7er78dfn1j7ddavavtuy3o4j9x3kgp7x860d58ku78rfaq01ucdh1gyhnev120weqrgn4jixsuev0dkhue1u4wpdfa04341x6jgdhu5cio4ncxr8lxqh1f1bjlyq83b0myu3sb9f5xz2scwqa1m47elm39dqh5xxs6a6b9eh37cewsp9nnao2ok45c58sfrohigvfovjn2idmj6bty29ftp8auxfuo7pw2p0yiz8pc0p2pmnuxmvpq7sfqjsenn4of3uh1e0y4y0tt7owmf3rfqrrbnv99 == \3\u\x\g\m\i\p\w\0\x\t\f\s\a\i\7\8\5\3\l\x\5\x\v\z\1\3\1\w\5\e\g\y\h\7\7\c\1\1\e\3\a\w\z\2\s\y\5\q\d\l\m\p\j\4\f\w\q\5\d\5\m\k\s\4\c\j\l\j\i\u\p\j\7\n\l\g\v\9\p\a\u\x\e\4\3\o\m\8\g\o\8\1\d\y\o\v\m\3\z\0\c\t\l\4\h\6\o\p\k\7\c\r\3\r\f\w\o\k\3\9\d\f\1\f\l\o\9\w\i\0\i\3\2\z\d\i\c\c\i\l\m\j\p\c\e\k\h\i\u\l\8\j\y\q\t\n\n\c\8\g\j\4\q\w\5\p\j\s\h\l\s\a\n\m\5\5\7\n\z\a\b\q\g\k\w\r\2\e\1\2\4\m\f\b\1\y\g\7\p\6\e\p\l\o\a\7\x\z\i\h\3\q\d\q\j\k\7\w\4\c\d\c\a\2\g\7\e\r\7\8\d\f\n\1\j\7\d\d\a\v\a\v\t\u\y\3\o\4\j\9\x\3\k\g\p\7\x\8\6\0\d\5\8\k\u\7\8\r\f\a\q\0\1\u\c\d\h\1\g\y\h\n\e\v\1\2\0\w\e\q\r\g\n\4\j\i\x\s\u\e\v\0\d\k\h\u\e\1\u\4\w\p\d\f\a\0\4\3\4\1\x\6\j\g\d\h\u\5\c\i\o\4\n\c\x\r\8\l\x\q\h\1\f\1\b\j\l\y\q\8\3\b\0\m\y\u\3\s\b\9\f\5\x\z\2\s\c\w\q\a\1\m\4\7\e\l\m\3\9\d\q\h\5\x\x\s\6\a\6\b\9\e\h\3\7\c\e\w\s\p\9\n\n\a\o\2\o\k\4\5\c\5\8\s\f\r\o\h\i\g\v\f\o\v\j\n\2\i\d\m\j\6\b\t\y\2\9\f\t\p\8\a\u\x\f\u\o\7\p\w\2\p\0\y\i\z\8\p\c\0\p\2\p\m\n\u\x\m\v\p\q\7\s\f\q\j\s\e\n\n\4\o\f\3\u\h\1\e\0\y\4\y\0\t\t\7\o\w\m\f\3\r\f\q\r\r\b\n\v\9\9 ]] 01:00:56.566 05:59:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:00:56.566 05:59:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 01:00:56.566 [2024-12-09 05:59:50.947154] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:56.566 [2024-12-09 05:59:50.947238] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60190 ] 01:00:56.566 [2024-12-09 05:59:51.096581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:56.566 [2024-12-09 05:59:51.142224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:56.825 [2024-12-09 05:59:51.187794] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:56.825  [2024-12-09T05:59:51.412Z] Copying: 512/512 [B] (average 250 kBps) 01:00:56.825 01:00:56.825 05:59:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 3uxgmipw0xtfsai7853lx5xvz131w5egyh77c11e3awz2sy5qdlmpj4fwq5d5mks4cjljiupj7nlgv9pauxe43om8go81dyovm3z0ctl4h6opk7cr3rfwok39df1flo9wi0i32zdiccilmjpcekhiul8jyqtnnc8gj4qw5pjshlsanm557nzabqgkwr2e124mfb1yg7p6eploa7xzih3qdqjk7w4cdca2g7er78dfn1j7ddavavtuy3o4j9x3kgp7x860d58ku78rfaq01ucdh1gyhnev120weqrgn4jixsuev0dkhue1u4wpdfa04341x6jgdhu5cio4ncxr8lxqh1f1bjlyq83b0myu3sb9f5xz2scwqa1m47elm39dqh5xxs6a6b9eh37cewsp9nnao2ok45c58sfrohigvfovjn2idmj6bty29ftp8auxfuo7pw2p0yiz8pc0p2pmnuxmvpq7sfqjsenn4of3uh1e0y4y0tt7owmf3rfqrrbnv99 == \3\u\x\g\m\i\p\w\0\x\t\f\s\a\i\7\8\5\3\l\x\5\x\v\z\1\3\1\w\5\e\g\y\h\7\7\c\1\1\e\3\a\w\z\2\s\y\5\q\d\l\m\p\j\4\f\w\q\5\d\5\m\k\s\4\c\j\l\j\i\u\p\j\7\n\l\g\v\9\p\a\u\x\e\4\3\o\m\8\g\o\8\1\d\y\o\v\m\3\z\0\c\t\l\4\h\6\o\p\k\7\c\r\3\r\f\w\o\k\3\9\d\f\1\f\l\o\9\w\i\0\i\3\2\z\d\i\c\c\i\l\m\j\p\c\e\k\h\i\u\l\8\j\y\q\t\n\n\c\8\g\j\4\q\w\5\p\j\s\h\l\s\a\n\m\5\5\7\n\z\a\b\q\g\k\w\r\2\e\1\2\4\m\f\b\1\y\g\7\p\6\e\p\l\o\a\7\x\z\i\h\3\q\d\q\j\k\7\w\4\c\d\c\a\2\g\7\e\r\7\8\d\f\n\1\j\7\d\d\a\v\a\v\t\u\y\3\o\4\j\9\x\3\k\g\p\7\x\8\6\0\d\5\8\k\u\7\8\r\f\a\q\0\1\u\c\d\h\1\g\y\h\n\e\v\1\2\0\w\e\q\r\g\n\4\j\i\x\s\u\e\v\0\d\k\h\u\e\1\u\4\w\p\d\f\a\0\4\3\4\1\x\6\j\g\d\h\u\5\c\i\o\4\n\c\x\r\8\l\x\q\h\1\f\1\b\j\l\y\q\8\3\b\0\m\y\u\3\s\b\9\f\5\x\z\2\s\c\w\q\a\1\m\4\7\e\l\m\3\9\d\q\h\5\x\x\s\6\a\6\b\9\e\h\3\7\c\e\w\s\p\9\n\n\a\o\2\o\k\4\5\c\5\8\s\f\r\o\h\i\g\v\f\o\v\j\n\2\i\d\m\j\6\b\t\y\2\9\f\t\p\8\a\u\x\f\u\o\7\p\w\2\p\0\y\i\z\8\p\c\0\p\2\p\m\n\u\x\m\v\p\q\7\s\f\q\j\s\e\n\n\4\o\f\3\u\h\1\e\0\y\4\y\0\t\t\7\o\w\m\f\3\r\f\q\r\r\b\n\v\9\9 ]] 01:00:56.825 05:59:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:00:56.825 05:59:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 01:00:57.084 [2024-12-09 05:59:51.424330] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:57.084 [2024-12-09 05:59:51.424406] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60194 ] 01:00:57.084 [2024-12-09 05:59:51.573773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:57.084 [2024-12-09 05:59:51.620943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:57.084 [2024-12-09 05:59:51.666688] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:57.344  [2024-12-09T05:59:51.931Z] Copying: 512/512 [B] (average 250 kBps) 01:00:57.344 01:00:57.344 05:59:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 3uxgmipw0xtfsai7853lx5xvz131w5egyh77c11e3awz2sy5qdlmpj4fwq5d5mks4cjljiupj7nlgv9pauxe43om8go81dyovm3z0ctl4h6opk7cr3rfwok39df1flo9wi0i32zdiccilmjpcekhiul8jyqtnnc8gj4qw5pjshlsanm557nzabqgkwr2e124mfb1yg7p6eploa7xzih3qdqjk7w4cdca2g7er78dfn1j7ddavavtuy3o4j9x3kgp7x860d58ku78rfaq01ucdh1gyhnev120weqrgn4jixsuev0dkhue1u4wpdfa04341x6jgdhu5cio4ncxr8lxqh1f1bjlyq83b0myu3sb9f5xz2scwqa1m47elm39dqh5xxs6a6b9eh37cewsp9nnao2ok45c58sfrohigvfovjn2idmj6bty29ftp8auxfuo7pw2p0yiz8pc0p2pmnuxmvpq7sfqjsenn4of3uh1e0y4y0tt7owmf3rfqrrbnv99 == \3\u\x\g\m\i\p\w\0\x\t\f\s\a\i\7\8\5\3\l\x\5\x\v\z\1\3\1\w\5\e\g\y\h\7\7\c\1\1\e\3\a\w\z\2\s\y\5\q\d\l\m\p\j\4\f\w\q\5\d\5\m\k\s\4\c\j\l\j\i\u\p\j\7\n\l\g\v\9\p\a\u\x\e\4\3\o\m\8\g\o\8\1\d\y\o\v\m\3\z\0\c\t\l\4\h\6\o\p\k\7\c\r\3\r\f\w\o\k\3\9\d\f\1\f\l\o\9\w\i\0\i\3\2\z\d\i\c\c\i\l\m\j\p\c\e\k\h\i\u\l\8\j\y\q\t\n\n\c\8\g\j\4\q\w\5\p\j\s\h\l\s\a\n\m\5\5\7\n\z\a\b\q\g\k\w\r\2\e\1\2\4\m\f\b\1\y\g\7\p\6\e\p\l\o\a\7\x\z\i\h\3\q\d\q\j\k\7\w\4\c\d\c\a\2\g\7\e\r\7\8\d\f\n\1\j\7\d\d\a\v\a\v\t\u\y\3\o\4\j\9\x\3\k\g\p\7\x\8\6\0\d\5\8\k\u\7\8\r\f\a\q\0\1\u\c\d\h\1\g\y\h\n\e\v\1\2\0\w\e\q\r\g\n\4\j\i\x\s\u\e\v\0\d\k\h\u\e\1\u\4\w\p\d\f\a\0\4\3\4\1\x\6\j\g\d\h\u\5\c\i\o\4\n\c\x\r\8\l\x\q\h\1\f\1\b\j\l\y\q\8\3\b\0\m\y\u\3\s\b\9\f\5\x\z\2\s\c\w\q\a\1\m\4\7\e\l\m\3\9\d\q\h\5\x\x\s\6\a\6\b\9\e\h\3\7\c\e\w\s\p\9\n\n\a\o\2\o\k\4\5\c\5\8\s\f\r\o\h\i\g\v\f\o\v\j\n\2\i\d\m\j\6\b\t\y\2\9\f\t\p\8\a\u\x\f\u\o\7\p\w\2\p\0\y\i\z\8\p\c\0\p\2\p\m\n\u\x\m\v\p\q\7\s\f\q\j\s\e\n\n\4\o\f\3\u\h\1\e\0\y\4\y\0\t\t\7\o\w\m\f\3\r\f\q\r\r\b\n\v\9\9 ]] 01:00:57.344 05:59:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 01:00:57.344 05:59:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 01:00:57.344 05:59:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 01:00:57.344 05:59:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 01:00:57.344 05:59:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:00:57.344 05:59:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 01:00:57.344 [2024-12-09 05:59:51.917451] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:57.344 [2024-12-09 05:59:51.917523] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60209 ] 01:00:57.603 [2024-12-09 05:59:52.071014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:57.603 [2024-12-09 05:59:52.111683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:57.603 [2024-12-09 05:59:52.156525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:57.603  [2024-12-09T05:59:52.450Z] Copying: 512/512 [B] (average 500 kBps) 01:00:57.863 01:00:57.863 05:59:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ n8laz0jdkskxc75wo97k3s2nk4qa8mmmw7u6ert5k4ttk46rkgykf2bdfwm62gknajwxpz3opdh24pya8xhlai7674j7060h753hr2ufyp57t4wrv5bncdhq3m93co5ovicawc8tx3zye3zl0fx24lj2504iwbcjieg2xw5mnjt8u2hsvxte8y9ijdg1mrae4gyx9itewwu9zuqfwiw39nrfkxtsbpt50osrgt5i1u2um0b99urydlkd5ynw24ozhpjte9iyrckavzl5yqrbsmeh87w6w1o8miv7oqs9cw4db65m8yecfhq3jrvjlr78cmm4mj5zw2rdx9o4r3wpiohbr1kcu7ik57u8wqucl7ulevj6ywy0tvrcan8s21tvi6b28a2te8n4hzwckrrkhzjgm4jt60ownbghszac4ndq6qsn0s8uexyr983ag8aqwyfzipr2pvm0eggxgy2p0tao3g720xj8201a4bnq68u33hagtho3di0nsxzx5i8t == \n\8\l\a\z\0\j\d\k\s\k\x\c\7\5\w\o\9\7\k\3\s\2\n\k\4\q\a\8\m\m\m\w\7\u\6\e\r\t\5\k\4\t\t\k\4\6\r\k\g\y\k\f\2\b\d\f\w\m\6\2\g\k\n\a\j\w\x\p\z\3\o\p\d\h\2\4\p\y\a\8\x\h\l\a\i\7\6\7\4\j\7\0\6\0\h\7\5\3\h\r\2\u\f\y\p\5\7\t\4\w\r\v\5\b\n\c\d\h\q\3\m\9\3\c\o\5\o\v\i\c\a\w\c\8\t\x\3\z\y\e\3\z\l\0\f\x\2\4\l\j\2\5\0\4\i\w\b\c\j\i\e\g\2\x\w\5\m\n\j\t\8\u\2\h\s\v\x\t\e\8\y\9\i\j\d\g\1\m\r\a\e\4\g\y\x\9\i\t\e\w\w\u\9\z\u\q\f\w\i\w\3\9\n\r\f\k\x\t\s\b\p\t\5\0\o\s\r\g\t\5\i\1\u\2\u\m\0\b\9\9\u\r\y\d\l\k\d\5\y\n\w\2\4\o\z\h\p\j\t\e\9\i\y\r\c\k\a\v\z\l\5\y\q\r\b\s\m\e\h\8\7\w\6\w\1\o\8\m\i\v\7\o\q\s\9\c\w\4\d\b\6\5\m\8\y\e\c\f\h\q\3\j\r\v\j\l\r\7\8\c\m\m\4\m\j\5\z\w\2\r\d\x\9\o\4\r\3\w\p\i\o\h\b\r\1\k\c\u\7\i\k\5\7\u\8\w\q\u\c\l\7\u\l\e\v\j\6\y\w\y\0\t\v\r\c\a\n\8\s\2\1\t\v\i\6\b\2\8\a\2\t\e\8\n\4\h\z\w\c\k\r\r\k\h\z\j\g\m\4\j\t\6\0\o\w\n\b\g\h\s\z\a\c\4\n\d\q\6\q\s\n\0\s\8\u\e\x\y\r\9\8\3\a\g\8\a\q\w\y\f\z\i\p\r\2\p\v\m\0\e\g\g\x\g\y\2\p\0\t\a\o\3\g\7\2\0\x\j\8\2\0\1\a\4\b\n\q\6\8\u\3\3\h\a\g\t\h\o\3\d\i\0\n\s\x\z\x\5\i\8\t ]] 01:00:57.863 05:59:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:00:57.863 05:59:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 01:00:57.863 [2024-12-09 05:59:52.388482] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:57.863 [2024-12-09 05:59:52.388543] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60213 ] 01:00:58.123 [2024-12-09 05:59:52.539767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:58.123 [2024-12-09 05:59:52.583338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:58.123 [2024-12-09 05:59:52.627085] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:58.123  [2024-12-09T05:59:52.970Z] Copying: 512/512 [B] (average 500 kBps) 01:00:58.383 01:00:58.384 05:59:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ n8laz0jdkskxc75wo97k3s2nk4qa8mmmw7u6ert5k4ttk46rkgykf2bdfwm62gknajwxpz3opdh24pya8xhlai7674j7060h753hr2ufyp57t4wrv5bncdhq3m93co5ovicawc8tx3zye3zl0fx24lj2504iwbcjieg2xw5mnjt8u2hsvxte8y9ijdg1mrae4gyx9itewwu9zuqfwiw39nrfkxtsbpt50osrgt5i1u2um0b99urydlkd5ynw24ozhpjte9iyrckavzl5yqrbsmeh87w6w1o8miv7oqs9cw4db65m8yecfhq3jrvjlr78cmm4mj5zw2rdx9o4r3wpiohbr1kcu7ik57u8wqucl7ulevj6ywy0tvrcan8s21tvi6b28a2te8n4hzwckrrkhzjgm4jt60ownbghszac4ndq6qsn0s8uexyr983ag8aqwyfzipr2pvm0eggxgy2p0tao3g720xj8201a4bnq68u33hagtho3di0nsxzx5i8t == \n\8\l\a\z\0\j\d\k\s\k\x\c\7\5\w\o\9\7\k\3\s\2\n\k\4\q\a\8\m\m\m\w\7\u\6\e\r\t\5\k\4\t\t\k\4\6\r\k\g\y\k\f\2\b\d\f\w\m\6\2\g\k\n\a\j\w\x\p\z\3\o\p\d\h\2\4\p\y\a\8\x\h\l\a\i\7\6\7\4\j\7\0\6\0\h\7\5\3\h\r\2\u\f\y\p\5\7\t\4\w\r\v\5\b\n\c\d\h\q\3\m\9\3\c\o\5\o\v\i\c\a\w\c\8\t\x\3\z\y\e\3\z\l\0\f\x\2\4\l\j\2\5\0\4\i\w\b\c\j\i\e\g\2\x\w\5\m\n\j\t\8\u\2\h\s\v\x\t\e\8\y\9\i\j\d\g\1\m\r\a\e\4\g\y\x\9\i\t\e\w\w\u\9\z\u\q\f\w\i\w\3\9\n\r\f\k\x\t\s\b\p\t\5\0\o\s\r\g\t\5\i\1\u\2\u\m\0\b\9\9\u\r\y\d\l\k\d\5\y\n\w\2\4\o\z\h\p\j\t\e\9\i\y\r\c\k\a\v\z\l\5\y\q\r\b\s\m\e\h\8\7\w\6\w\1\o\8\m\i\v\7\o\q\s\9\c\w\4\d\b\6\5\m\8\y\e\c\f\h\q\3\j\r\v\j\l\r\7\8\c\m\m\4\m\j\5\z\w\2\r\d\x\9\o\4\r\3\w\p\i\o\h\b\r\1\k\c\u\7\i\k\5\7\u\8\w\q\u\c\l\7\u\l\e\v\j\6\y\w\y\0\t\v\r\c\a\n\8\s\2\1\t\v\i\6\b\2\8\a\2\t\e\8\n\4\h\z\w\c\k\r\r\k\h\z\j\g\m\4\j\t\6\0\o\w\n\b\g\h\s\z\a\c\4\n\d\q\6\q\s\n\0\s\8\u\e\x\y\r\9\8\3\a\g\8\a\q\w\y\f\z\i\p\r\2\p\v\m\0\e\g\g\x\g\y\2\p\0\t\a\o\3\g\7\2\0\x\j\8\2\0\1\a\4\b\n\q\6\8\u\3\3\h\a\g\t\h\o\3\d\i\0\n\s\x\z\x\5\i\8\t ]] 01:00:58.384 05:59:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:00:58.384 05:59:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 01:00:58.384 [2024-12-09 05:59:52.849798] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:58.384 [2024-12-09 05:59:52.849866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60227 ] 01:00:58.643 [2024-12-09 05:59:53.001823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:58.643 [2024-12-09 05:59:53.046399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:58.643 [2024-12-09 05:59:53.089313] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:58.643  [2024-12-09T05:59:53.488Z] Copying: 512/512 [B] (average 125 kBps) 01:00:58.901 01:00:58.902 05:59:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ n8laz0jdkskxc75wo97k3s2nk4qa8mmmw7u6ert5k4ttk46rkgykf2bdfwm62gknajwxpz3opdh24pya8xhlai7674j7060h753hr2ufyp57t4wrv5bncdhq3m93co5ovicawc8tx3zye3zl0fx24lj2504iwbcjieg2xw5mnjt8u2hsvxte8y9ijdg1mrae4gyx9itewwu9zuqfwiw39nrfkxtsbpt50osrgt5i1u2um0b99urydlkd5ynw24ozhpjte9iyrckavzl5yqrbsmeh87w6w1o8miv7oqs9cw4db65m8yecfhq3jrvjlr78cmm4mj5zw2rdx9o4r3wpiohbr1kcu7ik57u8wqucl7ulevj6ywy0tvrcan8s21tvi6b28a2te8n4hzwckrrkhzjgm4jt60ownbghszac4ndq6qsn0s8uexyr983ag8aqwyfzipr2pvm0eggxgy2p0tao3g720xj8201a4bnq68u33hagtho3di0nsxzx5i8t == \n\8\l\a\z\0\j\d\k\s\k\x\c\7\5\w\o\9\7\k\3\s\2\n\k\4\q\a\8\m\m\m\w\7\u\6\e\r\t\5\k\4\t\t\k\4\6\r\k\g\y\k\f\2\b\d\f\w\m\6\2\g\k\n\a\j\w\x\p\z\3\o\p\d\h\2\4\p\y\a\8\x\h\l\a\i\7\6\7\4\j\7\0\6\0\h\7\5\3\h\r\2\u\f\y\p\5\7\t\4\w\r\v\5\b\n\c\d\h\q\3\m\9\3\c\o\5\o\v\i\c\a\w\c\8\t\x\3\z\y\e\3\z\l\0\f\x\2\4\l\j\2\5\0\4\i\w\b\c\j\i\e\g\2\x\w\5\m\n\j\t\8\u\2\h\s\v\x\t\e\8\y\9\i\j\d\g\1\m\r\a\e\4\g\y\x\9\i\t\e\w\w\u\9\z\u\q\f\w\i\w\3\9\n\r\f\k\x\t\s\b\p\t\5\0\o\s\r\g\t\5\i\1\u\2\u\m\0\b\9\9\u\r\y\d\l\k\d\5\y\n\w\2\4\o\z\h\p\j\t\e\9\i\y\r\c\k\a\v\z\l\5\y\q\r\b\s\m\e\h\8\7\w\6\w\1\o\8\m\i\v\7\o\q\s\9\c\w\4\d\b\6\5\m\8\y\e\c\f\h\q\3\j\r\v\j\l\r\7\8\c\m\m\4\m\j\5\z\w\2\r\d\x\9\o\4\r\3\w\p\i\o\h\b\r\1\k\c\u\7\i\k\5\7\u\8\w\q\u\c\l\7\u\l\e\v\j\6\y\w\y\0\t\v\r\c\a\n\8\s\2\1\t\v\i\6\b\2\8\a\2\t\e\8\n\4\h\z\w\c\k\r\r\k\h\z\j\g\m\4\j\t\6\0\o\w\n\b\g\h\s\z\a\c\4\n\d\q\6\q\s\n\0\s\8\u\e\x\y\r\9\8\3\a\g\8\a\q\w\y\f\z\i\p\r\2\p\v\m\0\e\g\g\x\g\y\2\p\0\t\a\o\3\g\7\2\0\x\j\8\2\0\1\a\4\b\n\q\6\8\u\3\3\h\a\g\t\h\o\3\d\i\0\n\s\x\z\x\5\i\8\t ]] 01:00:58.902 05:59:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:00:58.902 05:59:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 01:00:58.902 [2024-12-09 05:59:53.325970] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:58.902 [2024-12-09 05:59:53.326204] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60232 ] 01:00:58.902 [2024-12-09 05:59:53.477796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:59.160 [2024-12-09 05:59:53.519986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:59.160 [2024-12-09 05:59:53.566005] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:59.160  [2024-12-09T05:59:53.747Z] Copying: 512/512 [B] (average 250 kBps) 01:00:59.160 01:00:59.419 05:59:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ n8laz0jdkskxc75wo97k3s2nk4qa8mmmw7u6ert5k4ttk46rkgykf2bdfwm62gknajwxpz3opdh24pya8xhlai7674j7060h753hr2ufyp57t4wrv5bncdhq3m93co5ovicawc8tx3zye3zl0fx24lj2504iwbcjieg2xw5mnjt8u2hsvxte8y9ijdg1mrae4gyx9itewwu9zuqfwiw39nrfkxtsbpt50osrgt5i1u2um0b99urydlkd5ynw24ozhpjte9iyrckavzl5yqrbsmeh87w6w1o8miv7oqs9cw4db65m8yecfhq3jrvjlr78cmm4mj5zw2rdx9o4r3wpiohbr1kcu7ik57u8wqucl7ulevj6ywy0tvrcan8s21tvi6b28a2te8n4hzwckrrkhzjgm4jt60ownbghszac4ndq6qsn0s8uexyr983ag8aqwyfzipr2pvm0eggxgy2p0tao3g720xj8201a4bnq68u33hagtho3di0nsxzx5i8t == \n\8\l\a\z\0\j\d\k\s\k\x\c\7\5\w\o\9\7\k\3\s\2\n\k\4\q\a\8\m\m\m\w\7\u\6\e\r\t\5\k\4\t\t\k\4\6\r\k\g\y\k\f\2\b\d\f\w\m\6\2\g\k\n\a\j\w\x\p\z\3\o\p\d\h\2\4\p\y\a\8\x\h\l\a\i\7\6\7\4\j\7\0\6\0\h\7\5\3\h\r\2\u\f\y\p\5\7\t\4\w\r\v\5\b\n\c\d\h\q\3\m\9\3\c\o\5\o\v\i\c\a\w\c\8\t\x\3\z\y\e\3\z\l\0\f\x\2\4\l\j\2\5\0\4\i\w\b\c\j\i\e\g\2\x\w\5\m\n\j\t\8\u\2\h\s\v\x\t\e\8\y\9\i\j\d\g\1\m\r\a\e\4\g\y\x\9\i\t\e\w\w\u\9\z\u\q\f\w\i\w\3\9\n\r\f\k\x\t\s\b\p\t\5\0\o\s\r\g\t\5\i\1\u\2\u\m\0\b\9\9\u\r\y\d\l\k\d\5\y\n\w\2\4\o\z\h\p\j\t\e\9\i\y\r\c\k\a\v\z\l\5\y\q\r\b\s\m\e\h\8\7\w\6\w\1\o\8\m\i\v\7\o\q\s\9\c\w\4\d\b\6\5\m\8\y\e\c\f\h\q\3\j\r\v\j\l\r\7\8\c\m\m\4\m\j\5\z\w\2\r\d\x\9\o\4\r\3\w\p\i\o\h\b\r\1\k\c\u\7\i\k\5\7\u\8\w\q\u\c\l\7\u\l\e\v\j\6\y\w\y\0\t\v\r\c\a\n\8\s\2\1\t\v\i\6\b\2\8\a\2\t\e\8\n\4\h\z\w\c\k\r\r\k\h\z\j\g\m\4\j\t\6\0\o\w\n\b\g\h\s\z\a\c\4\n\d\q\6\q\s\n\0\s\8\u\e\x\y\r\9\8\3\a\g\8\a\q\w\y\f\z\i\p\r\2\p\v\m\0\e\g\g\x\g\y\2\p\0\t\a\o\3\g\7\2\0\x\j\8\2\0\1\a\4\b\n\q\6\8\u\3\3\h\a\g\t\h\o\3\d\i\0\n\s\x\z\x\5\i\8\t ]] 01:00:59.419 01:00:59.419 real 0m3.807s 01:00:59.419 user 0m1.962s 01:00:59.419 sys 0m1.864s 01:00:59.419 05:59:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:59.419 05:59:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 01:00:59.419 ************************************ 01:00:59.419 END TEST dd_flags_misc 01:00:59.419 ************************************ 01:00:59.419 05:59:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 01:00:59.419 05:59:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 01:00:59.419 * Second test run, disabling liburing, forcing AIO 01:00:59.419 05:59:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 01:00:59.419 05:59:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 01:00:59.419 05:59:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:00:59.419 05:59:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:59.419 05:59:53 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 01:00:59.419 ************************************ 01:00:59.419 START TEST dd_flag_append_forced_aio 01:00:59.419 ************************************ 01:00:59.419 05:59:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 01:00:59.419 05:59:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 01:00:59.419 05:59:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 01:00:59.419 05:59:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 01:00:59.419 05:59:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 01:00:59.419 05:59:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:00:59.419 05:59:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=xmr9z79v9zu3p43pvoq2hdk1gyjcyy7z 01:00:59.419 05:59:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 01:00:59.419 05:59:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 01:00:59.419 05:59:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:00:59.419 05:59:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=wu0fmvx5i53trqr77w1223ggsntix8jr 01:00:59.419 05:59:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s xmr9z79v9zu3p43pvoq2hdk1gyjcyy7z 01:00:59.419 05:59:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s wu0fmvx5i53trqr77w1223ggsntix8jr 01:00:59.419 05:59:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 01:00:59.419 [2024-12-09 05:59:53.898127] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:00:59.419 [2024-12-09 05:59:53.898200] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60265 ] 01:00:59.677 [2024-12-09 05:59:54.045985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:59.677 [2024-12-09 05:59:54.094551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:59.677 [2024-12-09 05:59:54.141148] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:00:59.677  [2024-12-09T05:59:54.523Z] Copying: 32/32 [B] (average 31 kBps) 01:00:59.936 01:00:59.936 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ wu0fmvx5i53trqr77w1223ggsntix8jrxmr9z79v9zu3p43pvoq2hdk1gyjcyy7z == \w\u\0\f\m\v\x\5\i\5\3\t\r\q\r\7\7\w\1\2\2\3\g\g\s\n\t\i\x\8\j\r\x\m\r\9\z\7\9\v\9\z\u\3\p\4\3\p\v\o\q\2\h\d\k\1\g\y\j\c\y\y\7\z ]] 01:00:59.936 01:00:59.936 real 0m0.520s 01:00:59.936 user 0m0.259s 01:00:59.936 sys 0m0.141s 01:00:59.936 ************************************ 01:00:59.936 END TEST dd_flag_append_forced_aio 01:00:59.936 ************************************ 01:00:59.936 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:59.936 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:00:59.936 05:59:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 01:00:59.936 05:59:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:00:59.936 05:59:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:59.936 05:59:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 01:00:59.936 ************************************ 01:00:59.936 START TEST dd_flag_directory_forced_aio 01:00:59.936 ************************************ 01:00:59.936 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 01:00:59.936 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:00:59.936 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 01:00:59.936 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:00:59.936 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:00:59.936 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:59.936 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:00:59.936 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:59.936 05:59:54 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:00:59.936 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:59.936 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:00:59.936 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:00:59.937 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:00:59.937 [2024-12-09 05:59:54.493752] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:00:59.937 [2024-12-09 05:59:54.493823] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60287 ] 01:01:00.196 [2024-12-09 05:59:54.643088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:00.196 [2024-12-09 05:59:54.694706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:00.196 [2024-12-09 05:59:54.741682] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:00.196 [2024-12-09 05:59:54.771908] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 01:01:00.196 [2024-12-09 05:59:54.771952] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 01:01:00.196 [2024-12-09 05:59:54.771963] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:01:00.456 [2024-12-09 05:59:54.867771] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 01:01:00.456 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 01:01:00.456 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:00.456 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 01:01:00.456 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 01:01:00.456 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 01:01:00.456 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:00.456 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 01:01:00.456 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 01:01:00.456 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 01:01:00.456 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:00.456 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:00.456 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:00.456 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:00.456 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:00.456 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:00.456 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:00.456 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:01:00.456 05:59:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 01:01:00.456 [2024-12-09 05:59:54.979455] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:01:00.456 [2024-12-09 05:59:54.979643] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60302 ] 01:01:00.717 [2024-12-09 05:59:55.129333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:00.717 [2024-12-09 05:59:55.175346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:00.717 [2024-12-09 05:59:55.220639] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:00.717 [2024-12-09 05:59:55.250413] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 01:01:00.717 [2024-12-09 05:59:55.250453] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 01:01:00.717 [2024-12-09 05:59:55.250465] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:01:00.975 [2024-12-09 05:59:55.346001] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 01:01:00.975 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 01:01:00.975 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:00.975 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 01:01:00.975 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 01:01:00.975 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 01:01:00.976 05:59:55 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:00.976 01:01:00.976 real 0m0.969s 01:01:00.976 user 0m0.485s 01:01:00.976 sys 0m0.275s 01:01:00.976 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:00.976 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:01:00.976 ************************************ 01:01:00.976 END TEST dd_flag_directory_forced_aio 01:01:00.976 ************************************ 01:01:00.976 05:59:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 01:01:00.976 05:59:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:00.976 05:59:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:00.976 05:59:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 01:01:00.976 ************************************ 01:01:00.976 START TEST dd_flag_nofollow_forced_aio 01:01:00.976 ************************************ 01:01:00.976 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 01:01:00.976 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 01:01:00.976 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 01:01:00.976 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 01:01:00.976 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 01:01:00.976 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:01:00.976 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 01:01:00.976 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:01:00.976 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:00.976 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:00.976 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:00.976 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:00.976 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:00.976 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:00.976 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:00.976 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:01:00.976 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:01:00.976 [2024-12-09 05:59:55.550332] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:01:00.976 [2024-12-09 05:59:55.550399] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60325 ] 01:01:01.234 [2024-12-09 05:59:55.699015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:01.234 [2024-12-09 05:59:55.750356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:01.234 [2024-12-09 05:59:55.797335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:01.494 [2024-12-09 05:59:55.827273] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 01:01:01.494 [2024-12-09 05:59:55.827312] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 01:01:01.494 [2024-12-09 05:59:55.827325] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:01:01.494 [2024-12-09 05:59:55.923418] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 01:01:01.494 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 01:01:01.494 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:01.494 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 01:01:01.494 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 01:01:01.494 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 01:01:01.494 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:01.494 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 01:01:01.494 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 01:01:01.494 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 01:01:01.494 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:01.494 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:01.494 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:01.494 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:01.494 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:01.494 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:01.494 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:01.494 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:01:01.494 05:59:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 01:01:01.494 [2024-12-09 05:59:56.038851] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:01:01.494 [2024-12-09 05:59:56.039050] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60340 ] 01:01:01.754 [2024-12-09 05:59:56.192233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:01.754 [2024-12-09 05:59:56.234797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:01.754 [2024-12-09 05:59:56.281111] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:01.754 [2024-12-09 05:59:56.310994] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 01:01:01.754 [2024-12-09 05:59:56.311266] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 01:01:01.754 [2024-12-09 05:59:56.311418] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:01:02.013 [2024-12-09 05:59:56.406708] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 01:01:02.013 05:59:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 01:01:02.013 05:59:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:02.013 05:59:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 01:01:02.013 05:59:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 01:01:02.013 05:59:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 01:01:02.013 05:59:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:02.013 05:59:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 01:01:02.013 05:59:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 01:01:02.013 05:59:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:01:02.013 05:59:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:01:02.013 [2024-12-09 05:59:56.529189] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:01:02.013 [2024-12-09 05:59:56.529274] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60342 ] 01:01:02.274 [2024-12-09 05:59:56.679202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:02.274 [2024-12-09 05:59:56.725513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:02.274 [2024-12-09 05:59:56.770973] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:02.274  [2024-12-09T05:59:57.121Z] Copying: 512/512 [B] (average 500 kBps) 01:01:02.534 01:01:02.534 ************************************ 01:01:02.534 END TEST dd_flag_nofollow_forced_aio 01:01:02.534 ************************************ 01:01:02.534 05:59:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ mtjna40oitzsirofw6b37zju7wgvr9agojxiapxgbpvcgbtqvo14stjg0pw9y4tucrugl1lq6b6yv6fxj3vi5xydqdi1rvxefy20bv9frb2oxrpqnwfsh9ba7lke79bm7xj14ndv6sejezo8avib3kyul15qr97d6c9rdbrsw98kea16ofmqcnsljdxglmtsb5yw3zcc5jvkkvwa81zsv82g3cjluw78os9hvhziqjoluyk9kqt39jkcby6d5xg6etypa704rj5yfkre7bmiui1j9zsoml5268ft8h00tzdx1ndhgam8tuw9cjan3lf43j6s928am0yg2sgrbp0tcsl47s11vlhzveap3vbnys8nk0hn622ksij9q69a116taiyibnbz9xj60yh74rz8yro3k4qdw33f1tt83hc2mivoy05huqbbltvxq4onydtbwx6yfpmgws5i2pn02xbpokbrlxc42ux3z2bmrz7j4bzard6dzwyckxmd8i155irl == \m\t\j\n\a\4\0\o\i\t\z\s\i\r\o\f\w\6\b\3\7\z\j\u\7\w\g\v\r\9\a\g\o\j\x\i\a\p\x\g\b\p\v\c\g\b\t\q\v\o\1\4\s\t\j\g\0\p\w\9\y\4\t\u\c\r\u\g\l\1\l\q\6\b\6\y\v\6\f\x\j\3\v\i\5\x\y\d\q\d\i\1\r\v\x\e\f\y\2\0\b\v\9\f\r\b\2\o\x\r\p\q\n\w\f\s\h\9\b\a\7\l\k\e\7\9\b\m\7\x\j\1\4\n\d\v\6\s\e\j\e\z\o\8\a\v\i\b\3\k\y\u\l\1\5\q\r\9\7\d\6\c\9\r\d\b\r\s\w\9\8\k\e\a\1\6\o\f\m\q\c\n\s\l\j\d\x\g\l\m\t\s\b\5\y\w\3\z\c\c\5\j\v\k\k\v\w\a\8\1\z\s\v\8\2\g\3\c\j\l\u\w\7\8\o\s\9\h\v\h\z\i\q\j\o\l\u\y\k\9\k\q\t\3\9\j\k\c\b\y\6\d\5\x\g\6\e\t\y\p\a\7\0\4\r\j\5\y\f\k\r\e\7\b\m\i\u\i\1\j\9\z\s\o\m\l\5\2\6\8\f\t\8\h\0\0\t\z\d\x\1\n\d\h\g\a\m\8\t\u\w\9\c\j\a\n\3\l\f\4\3\j\6\s\9\2\8\a\m\0\y\g\2\s\g\r\b\p\0\t\c\s\l\4\7\s\1\1\v\l\h\z\v\e\a\p\3\v\b\n\y\s\8\n\k\0\h\n\6\2\2\k\s\i\j\9\q\6\9\a\1\1\6\t\a\i\y\i\b\n\b\z\9\x\j\6\0\y\h\7\4\r\z\8\y\r\o\3\k\4\q\d\w\3\3\f\1\t\t\8\3\h\c\2\m\i\v\o\y\0\5\h\u\q\b\b\l\t\v\x\q\4\o\n\y\d\t\b\w\x\6\y\f\p\m\g\w\s\5\i\2\p\n\0\2\x\b\p\o\k\b\r\l\x\c\4\2\u\x\3\z\2\b\m\r\z\7\j\4\b\z\a\r\d\6\d\z\w\y\c\k\x\m\d\8\i\1\5\5\i\r\l ]] 01:01:02.534 01:01:02.534 real 0m1.505s 01:01:02.534 user 0m0.757s 01:01:02.534 sys 0m0.417s 01:01:02.534 05:59:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:02.534 05:59:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:01:02.534 05:59:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 01:01:02.534 05:59:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:02.534 05:59:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:02.534 05:59:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 01:01:02.534 ************************************ 01:01:02.534 START TEST dd_flag_noatime_forced_aio 01:01:02.534 ************************************ 01:01:02.534 05:59:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 01:01:02.534 05:59:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 01:01:02.534 05:59:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 01:01:02.534 05:59:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 01:01:02.534 05:59:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 01:01:02.534 05:59:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:01:02.534 05:59:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:01:02.534 05:59:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1733723996 01:01:02.534 05:59:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:01:02.534 05:59:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1733723996 01:01:02.534 05:59:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 01:01:03.918 05:59:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:01:03.918 [2024-12-09 05:59:58.160054] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:01:03.918 [2024-12-09 05:59:58.160140] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60388 ] 01:01:03.919 [2024-12-09 05:59:58.310356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:03.919 [2024-12-09 05:59:58.359106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:03.919 [2024-12-09 05:59:58.405789] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:03.919  [2024-12-09T05:59:58.765Z] Copying: 512/512 [B] (average 500 kBps) 01:01:04.178 01:01:04.178 05:59:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:01:04.178 05:59:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1733723996 )) 01:01:04.178 05:59:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:01:04.178 05:59:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1733723996 )) 01:01:04.179 05:59:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:01:04.179 [2024-12-09 05:59:58.683118] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:01:04.179 [2024-12-09 05:59:58.683197] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60394 ] 01:01:04.438 [2024-12-09 05:59:58.835614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:04.438 [2024-12-09 05:59:58.876952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:04.438 [2024-12-09 05:59:58.921774] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:04.438  [2024-12-09T05:59:59.285Z] Copying: 512/512 [B] (average 500 kBps) 01:01:04.699 01:01:04.699 05:59:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:01:04.699 ************************************ 01:01:04.699 END TEST dd_flag_noatime_forced_aio 01:01:04.699 ************************************ 01:01:04.699 05:59:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1733723998 )) 01:01:04.699 01:01:04.699 real 0m2.076s 01:01:04.699 user 0m0.548s 01:01:04.699 sys 0m0.288s 01:01:04.699 05:59:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:04.699 05:59:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:01:04.699 05:59:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 01:01:04.699 05:59:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:04.699 05:59:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:04.699 05:59:59 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 01:01:04.699 ************************************ 01:01:04.699 START TEST dd_flags_misc_forced_aio 01:01:04.699 ************************************ 01:01:04.699 05:59:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 01:01:04.699 05:59:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 01:01:04.699 05:59:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 01:01:04.699 05:59:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 01:01:04.699 05:59:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 01:01:04.699 05:59:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 01:01:04.699 05:59:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 01:01:04.699 05:59:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:01:04.699 05:59:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:01:04.699 05:59:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 01:01:04.958 [2024-12-09 05:59:59.292786] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:01:04.958 [2024-12-09 05:59:59.292860] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60426 ] 01:01:04.958 [2024-12-09 05:59:59.444633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:04.958 [2024-12-09 05:59:59.493133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:04.958 [2024-12-09 05:59:59.539886] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:05.218  [2024-12-09T05:59:59.805Z] Copying: 512/512 [B] (average 500 kBps) 01:01:05.218 01:01:05.218 05:59:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ wl6s5vphugy693iukwxe5qoyt5itdf81k0wfezbsnaepsycx23evegkli7zgs99pqxjopzjr4c23no9xwpbip0yoxq20l23g63j0wyqkc4lqs4o9jn9n58ov1m5gbslmlxbqiphrgcj6q3z1u4016cc92q1tzp3gkw8f2ripny80i9cmhjlo2u29awvxlsnt723w0mb3aulks6ayyhuqbpf31bynttkuqvc0sife1qjpybous3lspncjfbzo2xknlkc1a5p3rntsv69mcj36j5655dt55wgnphlzvbupqiyr5g55irly395vyky51imbdoveaxiq16cb7osaga3xe58xozlb20bvaxcris5qfgniwvowwa8h09u9fp853z73orxurccq3evrvnobjsr3avpu9mi5fe1jx6xwt1tdx8xqrx6o1vdd1bszz9fjpss03tkukmpzb9go4ni1fa4vy70ekrz9zwlo2unsg9049uzwowvmvox44467wnurklbo == 
\w\l\6\s\5\v\p\h\u\g\y\6\9\3\i\u\k\w\x\e\5\q\o\y\t\5\i\t\d\f\8\1\k\0\w\f\e\z\b\s\n\a\e\p\s\y\c\x\2\3\e\v\e\g\k\l\i\7\z\g\s\9\9\p\q\x\j\o\p\z\j\r\4\c\2\3\n\o\9\x\w\p\b\i\p\0\y\o\x\q\2\0\l\2\3\g\6\3\j\0\w\y\q\k\c\4\l\q\s\4\o\9\j\n\9\n\5\8\o\v\1\m\5\g\b\s\l\m\l\x\b\q\i\p\h\r\g\c\j\6\q\3\z\1\u\4\0\1\6\c\c\9\2\q\1\t\z\p\3\g\k\w\8\f\2\r\i\p\n\y\8\0\i\9\c\m\h\j\l\o\2\u\2\9\a\w\v\x\l\s\n\t\7\2\3\w\0\m\b\3\a\u\l\k\s\6\a\y\y\h\u\q\b\p\f\3\1\b\y\n\t\t\k\u\q\v\c\0\s\i\f\e\1\q\j\p\y\b\o\u\s\3\l\s\p\n\c\j\f\b\z\o\2\x\k\n\l\k\c\1\a\5\p\3\r\n\t\s\v\6\9\m\c\j\3\6\j\5\6\5\5\d\t\5\5\w\g\n\p\h\l\z\v\b\u\p\q\i\y\r\5\g\5\5\i\r\l\y\3\9\5\v\y\k\y\5\1\i\m\b\d\o\v\e\a\x\i\q\1\6\c\b\7\o\s\a\g\a\3\x\e\5\8\x\o\z\l\b\2\0\b\v\a\x\c\r\i\s\5\q\f\g\n\i\w\v\o\w\w\a\8\h\0\9\u\9\f\p\8\5\3\z\7\3\o\r\x\u\r\c\c\q\3\e\v\r\v\n\o\b\j\s\r\3\a\v\p\u\9\m\i\5\f\e\1\j\x\6\x\w\t\1\t\d\x\8\x\q\r\x\6\o\1\v\d\d\1\b\s\z\z\9\f\j\p\s\s\0\3\t\k\u\k\m\p\z\b\9\g\o\4\n\i\1\f\a\4\v\y\7\0\e\k\r\z\9\z\w\l\o\2\u\n\s\g\9\0\4\9\u\z\w\o\w\v\m\v\o\x\4\4\4\6\7\w\n\u\r\k\l\b\o ]] 01:01:05.218 05:59:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:01:05.218 05:59:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 01:01:05.477 [2024-12-09 05:59:59.802598] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:01:05.477 [2024-12-09 05:59:59.802669] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60428 ] 01:01:05.477 [2024-12-09 05:59:59.951511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:05.477 [2024-12-09 05:59:59.998999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:05.477 [2024-12-09 06:00:00.043772] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:05.736  [2024-12-09T06:00:00.323Z] Copying: 512/512 [B] (average 500 kBps) 01:01:05.736 01:01:05.736 06:00:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ wl6s5vphugy693iukwxe5qoyt5itdf81k0wfezbsnaepsycx23evegkli7zgs99pqxjopzjr4c23no9xwpbip0yoxq20l23g63j0wyqkc4lqs4o9jn9n58ov1m5gbslmlxbqiphrgcj6q3z1u4016cc92q1tzp3gkw8f2ripny80i9cmhjlo2u29awvxlsnt723w0mb3aulks6ayyhuqbpf31bynttkuqvc0sife1qjpybous3lspncjfbzo2xknlkc1a5p3rntsv69mcj36j5655dt55wgnphlzvbupqiyr5g55irly395vyky51imbdoveaxiq16cb7osaga3xe58xozlb20bvaxcris5qfgniwvowwa8h09u9fp853z73orxurccq3evrvnobjsr3avpu9mi5fe1jx6xwt1tdx8xqrx6o1vdd1bszz9fjpss03tkukmpzb9go4ni1fa4vy70ekrz9zwlo2unsg9049uzwowvmvox44467wnurklbo == 
\w\l\6\s\5\v\p\h\u\g\y\6\9\3\i\u\k\w\x\e\5\q\o\y\t\5\i\t\d\f\8\1\k\0\w\f\e\z\b\s\n\a\e\p\s\y\c\x\2\3\e\v\e\g\k\l\i\7\z\g\s\9\9\p\q\x\j\o\p\z\j\r\4\c\2\3\n\o\9\x\w\p\b\i\p\0\y\o\x\q\2\0\l\2\3\g\6\3\j\0\w\y\q\k\c\4\l\q\s\4\o\9\j\n\9\n\5\8\o\v\1\m\5\g\b\s\l\m\l\x\b\q\i\p\h\r\g\c\j\6\q\3\z\1\u\4\0\1\6\c\c\9\2\q\1\t\z\p\3\g\k\w\8\f\2\r\i\p\n\y\8\0\i\9\c\m\h\j\l\o\2\u\2\9\a\w\v\x\l\s\n\t\7\2\3\w\0\m\b\3\a\u\l\k\s\6\a\y\y\h\u\q\b\p\f\3\1\b\y\n\t\t\k\u\q\v\c\0\s\i\f\e\1\q\j\p\y\b\o\u\s\3\l\s\p\n\c\j\f\b\z\o\2\x\k\n\l\k\c\1\a\5\p\3\r\n\t\s\v\6\9\m\c\j\3\6\j\5\6\5\5\d\t\5\5\w\g\n\p\h\l\z\v\b\u\p\q\i\y\r\5\g\5\5\i\r\l\y\3\9\5\v\y\k\y\5\1\i\m\b\d\o\v\e\a\x\i\q\1\6\c\b\7\o\s\a\g\a\3\x\e\5\8\x\o\z\l\b\2\0\b\v\a\x\c\r\i\s\5\q\f\g\n\i\w\v\o\w\w\a\8\h\0\9\u\9\f\p\8\5\3\z\7\3\o\r\x\u\r\c\c\q\3\e\v\r\v\n\o\b\j\s\r\3\a\v\p\u\9\m\i\5\f\e\1\j\x\6\x\w\t\1\t\d\x\8\x\q\r\x\6\o\1\v\d\d\1\b\s\z\z\9\f\j\p\s\s\0\3\t\k\u\k\m\p\z\b\9\g\o\4\n\i\1\f\a\4\v\y\7\0\e\k\r\z\9\z\w\l\o\2\u\n\s\g\9\0\4\9\u\z\w\o\w\v\m\v\o\x\4\4\4\6\7\w\n\u\r\k\l\b\o ]] 01:01:05.736 06:00:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:01:05.736 06:00:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 01:01:05.736 [2024-12-09 06:00:00.305715] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:01:05.736 [2024-12-09 06:00:00.305910] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60441 ] 01:01:05.996 [2024-12-09 06:00:00.458051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:05.996 [2024-12-09 06:00:00.500405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:05.996 [2024-12-09 06:00:00.546816] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:05.996  [2024-12-09T06:00:00.843Z] Copying: 512/512 [B] (average 125 kBps) 01:01:06.256 01:01:06.257 06:00:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ wl6s5vphugy693iukwxe5qoyt5itdf81k0wfezbsnaepsycx23evegkli7zgs99pqxjopzjr4c23no9xwpbip0yoxq20l23g63j0wyqkc4lqs4o9jn9n58ov1m5gbslmlxbqiphrgcj6q3z1u4016cc92q1tzp3gkw8f2ripny80i9cmhjlo2u29awvxlsnt723w0mb3aulks6ayyhuqbpf31bynttkuqvc0sife1qjpybous3lspncjfbzo2xknlkc1a5p3rntsv69mcj36j5655dt55wgnphlzvbupqiyr5g55irly395vyky51imbdoveaxiq16cb7osaga3xe58xozlb20bvaxcris5qfgniwvowwa8h09u9fp853z73orxurccq3evrvnobjsr3avpu9mi5fe1jx6xwt1tdx8xqrx6o1vdd1bszz9fjpss03tkukmpzb9go4ni1fa4vy70ekrz9zwlo2unsg9049uzwowvmvox44467wnurklbo == 
\w\l\6\s\5\v\p\h\u\g\y\6\9\3\i\u\k\w\x\e\5\q\o\y\t\5\i\t\d\f\8\1\k\0\w\f\e\z\b\s\n\a\e\p\s\y\c\x\2\3\e\v\e\g\k\l\i\7\z\g\s\9\9\p\q\x\j\o\p\z\j\r\4\c\2\3\n\o\9\x\w\p\b\i\p\0\y\o\x\q\2\0\l\2\3\g\6\3\j\0\w\y\q\k\c\4\l\q\s\4\o\9\j\n\9\n\5\8\o\v\1\m\5\g\b\s\l\m\l\x\b\q\i\p\h\r\g\c\j\6\q\3\z\1\u\4\0\1\6\c\c\9\2\q\1\t\z\p\3\g\k\w\8\f\2\r\i\p\n\y\8\0\i\9\c\m\h\j\l\o\2\u\2\9\a\w\v\x\l\s\n\t\7\2\3\w\0\m\b\3\a\u\l\k\s\6\a\y\y\h\u\q\b\p\f\3\1\b\y\n\t\t\k\u\q\v\c\0\s\i\f\e\1\q\j\p\y\b\o\u\s\3\l\s\p\n\c\j\f\b\z\o\2\x\k\n\l\k\c\1\a\5\p\3\r\n\t\s\v\6\9\m\c\j\3\6\j\5\6\5\5\d\t\5\5\w\g\n\p\h\l\z\v\b\u\p\q\i\y\r\5\g\5\5\i\r\l\y\3\9\5\v\y\k\y\5\1\i\m\b\d\o\v\e\a\x\i\q\1\6\c\b\7\o\s\a\g\a\3\x\e\5\8\x\o\z\l\b\2\0\b\v\a\x\c\r\i\s\5\q\f\g\n\i\w\v\o\w\w\a\8\h\0\9\u\9\f\p\8\5\3\z\7\3\o\r\x\u\r\c\c\q\3\e\v\r\v\n\o\b\j\s\r\3\a\v\p\u\9\m\i\5\f\e\1\j\x\6\x\w\t\1\t\d\x\8\x\q\r\x\6\o\1\v\d\d\1\b\s\z\z\9\f\j\p\s\s\0\3\t\k\u\k\m\p\z\b\9\g\o\4\n\i\1\f\a\4\v\y\7\0\e\k\r\z\9\z\w\l\o\2\u\n\s\g\9\0\4\9\u\z\w\o\w\v\m\v\o\x\4\4\4\6\7\w\n\u\r\k\l\b\o ]] 01:01:06.257 06:00:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:01:06.257 06:00:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 01:01:06.257 [2024-12-09 06:00:00.812579] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:01:06.257 [2024-12-09 06:00:00.812648] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60443 ] 01:01:06.517 [2024-12-09 06:00:00.963954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:06.517 [2024-12-09 06:00:01.010617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:06.517 [2024-12-09 06:00:01.057369] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:06.517  [2024-12-09T06:00:01.363Z] Copying: 512/512 [B] (average 250 kBps) 01:01:06.776 01:01:06.776 06:00:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ wl6s5vphugy693iukwxe5qoyt5itdf81k0wfezbsnaepsycx23evegkli7zgs99pqxjopzjr4c23no9xwpbip0yoxq20l23g63j0wyqkc4lqs4o9jn9n58ov1m5gbslmlxbqiphrgcj6q3z1u4016cc92q1tzp3gkw8f2ripny80i9cmhjlo2u29awvxlsnt723w0mb3aulks6ayyhuqbpf31bynttkuqvc0sife1qjpybous3lspncjfbzo2xknlkc1a5p3rntsv69mcj36j5655dt55wgnphlzvbupqiyr5g55irly395vyky51imbdoveaxiq16cb7osaga3xe58xozlb20bvaxcris5qfgniwvowwa8h09u9fp853z73orxurccq3evrvnobjsr3avpu9mi5fe1jx6xwt1tdx8xqrx6o1vdd1bszz9fjpss03tkukmpzb9go4ni1fa4vy70ekrz9zwlo2unsg9049uzwowvmvox44467wnurklbo == 
\w\l\6\s\5\v\p\h\u\g\y\6\9\3\i\u\k\w\x\e\5\q\o\y\t\5\i\t\d\f\8\1\k\0\w\f\e\z\b\s\n\a\e\p\s\y\c\x\2\3\e\v\e\g\k\l\i\7\z\g\s\9\9\p\q\x\j\o\p\z\j\r\4\c\2\3\n\o\9\x\w\p\b\i\p\0\y\o\x\q\2\0\l\2\3\g\6\3\j\0\w\y\q\k\c\4\l\q\s\4\o\9\j\n\9\n\5\8\o\v\1\m\5\g\b\s\l\m\l\x\b\q\i\p\h\r\g\c\j\6\q\3\z\1\u\4\0\1\6\c\c\9\2\q\1\t\z\p\3\g\k\w\8\f\2\r\i\p\n\y\8\0\i\9\c\m\h\j\l\o\2\u\2\9\a\w\v\x\l\s\n\t\7\2\3\w\0\m\b\3\a\u\l\k\s\6\a\y\y\h\u\q\b\p\f\3\1\b\y\n\t\t\k\u\q\v\c\0\s\i\f\e\1\q\j\p\y\b\o\u\s\3\l\s\p\n\c\j\f\b\z\o\2\x\k\n\l\k\c\1\a\5\p\3\r\n\t\s\v\6\9\m\c\j\3\6\j\5\6\5\5\d\t\5\5\w\g\n\p\h\l\z\v\b\u\p\q\i\y\r\5\g\5\5\i\r\l\y\3\9\5\v\y\k\y\5\1\i\m\b\d\o\v\e\a\x\i\q\1\6\c\b\7\o\s\a\g\a\3\x\e\5\8\x\o\z\l\b\2\0\b\v\a\x\c\r\i\s\5\q\f\g\n\i\w\v\o\w\w\a\8\h\0\9\u\9\f\p\8\5\3\z\7\3\o\r\x\u\r\c\c\q\3\e\v\r\v\n\o\b\j\s\r\3\a\v\p\u\9\m\i\5\f\e\1\j\x\6\x\w\t\1\t\d\x\8\x\q\r\x\6\o\1\v\d\d\1\b\s\z\z\9\f\j\p\s\s\0\3\t\k\u\k\m\p\z\b\9\g\o\4\n\i\1\f\a\4\v\y\7\0\e\k\r\z\9\z\w\l\o\2\u\n\s\g\9\0\4\9\u\z\w\o\w\v\m\v\o\x\4\4\4\6\7\w\n\u\r\k\l\b\o ]] 01:01:06.776 06:00:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 01:01:06.776 06:00:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 01:01:06.776 06:00:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 01:01:06.776 06:00:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:01:06.776 06:00:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:01:06.777 06:00:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 01:01:06.777 [2024-12-09 06:00:01.333753] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:01:06.777 [2024-12-09 06:00:01.333818] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60456 ] 01:01:07.037 [2024-12-09 06:00:01.481470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:07.037 [2024-12-09 06:00:01.519742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:07.037 [2024-12-09 06:00:01.560930] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:07.037  [2024-12-09T06:00:01.883Z] Copying: 512/512 [B] (average 500 kBps) 01:01:07.296 01:01:07.296 06:00:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ oaixcxxgusgpu242ya52905wahbvvsek8mxogqxca007sfb3yn2vri2l02qmne7200zklxdnmscmb4o1jvkmxkbz29lbar61mhweb0rg15a52868kzdi73mfrull5psmvfappxkjyotrxntytvfovq6ittmk0p3wqi0blwceiqitdef7tvosswv2o53b795y0kmcz08zxn84nfbk3d8wazchw6ouxzrme5cq8w4w1nm73y7avsytystqqcsbo9opnosq97eftj7l982y18t1ug7o2wd0i17s6ee1tgz39fjxyjxx2dl5rqg3bjs970hnybeaog47g7blfr4ngtlivl9hhyv1hntpj8lda2vgtlo7mgbntnkgkmdv24vnfyq7ro8g59heifx6hab0xmaj0kdk2cosh51o07tre3r6xjdzw9fejfopcbkwyecnoftdq0i53byvnjs2ogxbjg8x18ttnf3c7h4016610biputp4wqjdpd8vdi04645kykqd == \o\a\i\x\c\x\x\g\u\s\g\p\u\2\4\2\y\a\5\2\9\0\5\w\a\h\b\v\v\s\e\k\8\m\x\o\g\q\x\c\a\0\0\7\s\f\b\3\y\n\2\v\r\i\2\l\0\2\q\m\n\e\7\2\0\0\z\k\l\x\d\n\m\s\c\m\b\4\o\1\j\v\k\m\x\k\b\z\2\9\l\b\a\r\6\1\m\h\w\e\b\0\r\g\1\5\a\5\2\8\6\8\k\z\d\i\7\3\m\f\r\u\l\l\5\p\s\m\v\f\a\p\p\x\k\j\y\o\t\r\x\n\t\y\t\v\f\o\v\q\6\i\t\t\m\k\0\p\3\w\q\i\0\b\l\w\c\e\i\q\i\t\d\e\f\7\t\v\o\s\s\w\v\2\o\5\3\b\7\9\5\y\0\k\m\c\z\0\8\z\x\n\8\4\n\f\b\k\3\d\8\w\a\z\c\h\w\6\o\u\x\z\r\m\e\5\c\q\8\w\4\w\1\n\m\7\3\y\7\a\v\s\y\t\y\s\t\q\q\c\s\b\o\9\o\p\n\o\s\q\9\7\e\f\t\j\7\l\9\8\2\y\1\8\t\1\u\g\7\o\2\w\d\0\i\1\7\s\6\e\e\1\t\g\z\3\9\f\j\x\y\j\x\x\2\d\l\5\r\q\g\3\b\j\s\9\7\0\h\n\y\b\e\a\o\g\4\7\g\7\b\l\f\r\4\n\g\t\l\i\v\l\9\h\h\y\v\1\h\n\t\p\j\8\l\d\a\2\v\g\t\l\o\7\m\g\b\n\t\n\k\g\k\m\d\v\2\4\v\n\f\y\q\7\r\o\8\g\5\9\h\e\i\f\x\6\h\a\b\0\x\m\a\j\0\k\d\k\2\c\o\s\h\5\1\o\0\7\t\r\e\3\r\6\x\j\d\z\w\9\f\e\j\f\o\p\c\b\k\w\y\e\c\n\o\f\t\d\q\0\i\5\3\b\y\v\n\j\s\2\o\g\x\b\j\g\8\x\1\8\t\t\n\f\3\c\7\h\4\0\1\6\6\1\0\b\i\p\u\t\p\4\w\q\j\d\p\d\8\v\d\i\0\4\6\4\5\k\y\k\q\d ]] 01:01:07.296 06:00:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:01:07.296 06:00:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 01:01:07.296 [2024-12-09 06:00:01.818248] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:01:07.296 [2024-12-09 06:00:01.818454] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60458 ] 01:01:07.556 [2024-12-09 06:00:01.966836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:07.556 [2024-12-09 06:00:02.005382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:07.556 [2024-12-09 06:00:02.046456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:07.556  [2024-12-09T06:00:02.402Z] Copying: 512/512 [B] (average 500 kBps) 01:01:07.815 01:01:07.815 06:00:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ oaixcxxgusgpu242ya52905wahbvvsek8mxogqxca007sfb3yn2vri2l02qmne7200zklxdnmscmb4o1jvkmxkbz29lbar61mhweb0rg15a52868kzdi73mfrull5psmvfappxkjyotrxntytvfovq6ittmk0p3wqi0blwceiqitdef7tvosswv2o53b795y0kmcz08zxn84nfbk3d8wazchw6ouxzrme5cq8w4w1nm73y7avsytystqqcsbo9opnosq97eftj7l982y18t1ug7o2wd0i17s6ee1tgz39fjxyjxx2dl5rqg3bjs970hnybeaog47g7blfr4ngtlivl9hhyv1hntpj8lda2vgtlo7mgbntnkgkmdv24vnfyq7ro8g59heifx6hab0xmaj0kdk2cosh51o07tre3r6xjdzw9fejfopcbkwyecnoftdq0i53byvnjs2ogxbjg8x18ttnf3c7h4016610biputp4wqjdpd8vdi04645kykqd == \o\a\i\x\c\x\x\g\u\s\g\p\u\2\4\2\y\a\5\2\9\0\5\w\a\h\b\v\v\s\e\k\8\m\x\o\g\q\x\c\a\0\0\7\s\f\b\3\y\n\2\v\r\i\2\l\0\2\q\m\n\e\7\2\0\0\z\k\l\x\d\n\m\s\c\m\b\4\o\1\j\v\k\m\x\k\b\z\2\9\l\b\a\r\6\1\m\h\w\e\b\0\r\g\1\5\a\5\2\8\6\8\k\z\d\i\7\3\m\f\r\u\l\l\5\p\s\m\v\f\a\p\p\x\k\j\y\o\t\r\x\n\t\y\t\v\f\o\v\q\6\i\t\t\m\k\0\p\3\w\q\i\0\b\l\w\c\e\i\q\i\t\d\e\f\7\t\v\o\s\s\w\v\2\o\5\3\b\7\9\5\y\0\k\m\c\z\0\8\z\x\n\8\4\n\f\b\k\3\d\8\w\a\z\c\h\w\6\o\u\x\z\r\m\e\5\c\q\8\w\4\w\1\n\m\7\3\y\7\a\v\s\y\t\y\s\t\q\q\c\s\b\o\9\o\p\n\o\s\q\9\7\e\f\t\j\7\l\9\8\2\y\1\8\t\1\u\g\7\o\2\w\d\0\i\1\7\s\6\e\e\1\t\g\z\3\9\f\j\x\y\j\x\x\2\d\l\5\r\q\g\3\b\j\s\9\7\0\h\n\y\b\e\a\o\g\4\7\g\7\b\l\f\r\4\n\g\t\l\i\v\l\9\h\h\y\v\1\h\n\t\p\j\8\l\d\a\2\v\g\t\l\o\7\m\g\b\n\t\n\k\g\k\m\d\v\2\4\v\n\f\y\q\7\r\o\8\g\5\9\h\e\i\f\x\6\h\a\b\0\x\m\a\j\0\k\d\k\2\c\o\s\h\5\1\o\0\7\t\r\e\3\r\6\x\j\d\z\w\9\f\e\j\f\o\p\c\b\k\w\y\e\c\n\o\f\t\d\q\0\i\5\3\b\y\v\n\j\s\2\o\g\x\b\j\g\8\x\1\8\t\t\n\f\3\c\7\h\4\0\1\6\6\1\0\b\i\p\u\t\p\4\w\q\j\d\p\d\8\v\d\i\0\4\6\4\5\k\y\k\q\d ]] 01:01:07.815 06:00:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:01:07.815 06:00:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 01:01:07.815 [2024-12-09 06:00:02.302615] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:01:07.815 [2024-12-09 06:00:02.302695] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60471 ] 01:01:08.075 [2024-12-09 06:00:02.452167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:08.075 [2024-12-09 06:00:02.487595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:08.075 [2024-12-09 06:00:02.528768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:08.075  [2024-12-09T06:00:02.923Z] Copying: 512/512 [B] (average 250 kBps) 01:01:08.336 01:01:08.336 06:00:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ oaixcxxgusgpu242ya52905wahbvvsek8mxogqxca007sfb3yn2vri2l02qmne7200zklxdnmscmb4o1jvkmxkbz29lbar61mhweb0rg15a52868kzdi73mfrull5psmvfappxkjyotrxntytvfovq6ittmk0p3wqi0blwceiqitdef7tvosswv2o53b795y0kmcz08zxn84nfbk3d8wazchw6ouxzrme5cq8w4w1nm73y7avsytystqqcsbo9opnosq97eftj7l982y18t1ug7o2wd0i17s6ee1tgz39fjxyjxx2dl5rqg3bjs970hnybeaog47g7blfr4ngtlivl9hhyv1hntpj8lda2vgtlo7mgbntnkgkmdv24vnfyq7ro8g59heifx6hab0xmaj0kdk2cosh51o07tre3r6xjdzw9fejfopcbkwyecnoftdq0i53byvnjs2ogxbjg8x18ttnf3c7h4016610biputp4wqjdpd8vdi04645kykqd == \o\a\i\x\c\x\x\g\u\s\g\p\u\2\4\2\y\a\5\2\9\0\5\w\a\h\b\v\v\s\e\k\8\m\x\o\g\q\x\c\a\0\0\7\s\f\b\3\y\n\2\v\r\i\2\l\0\2\q\m\n\e\7\2\0\0\z\k\l\x\d\n\m\s\c\m\b\4\o\1\j\v\k\m\x\k\b\z\2\9\l\b\a\r\6\1\m\h\w\e\b\0\r\g\1\5\a\5\2\8\6\8\k\z\d\i\7\3\m\f\r\u\l\l\5\p\s\m\v\f\a\p\p\x\k\j\y\o\t\r\x\n\t\y\t\v\f\o\v\q\6\i\t\t\m\k\0\p\3\w\q\i\0\b\l\w\c\e\i\q\i\t\d\e\f\7\t\v\o\s\s\w\v\2\o\5\3\b\7\9\5\y\0\k\m\c\z\0\8\z\x\n\8\4\n\f\b\k\3\d\8\w\a\z\c\h\w\6\o\u\x\z\r\m\e\5\c\q\8\w\4\w\1\n\m\7\3\y\7\a\v\s\y\t\y\s\t\q\q\c\s\b\o\9\o\p\n\o\s\q\9\7\e\f\t\j\7\l\9\8\2\y\1\8\t\1\u\g\7\o\2\w\d\0\i\1\7\s\6\e\e\1\t\g\z\3\9\f\j\x\y\j\x\x\2\d\l\5\r\q\g\3\b\j\s\9\7\0\h\n\y\b\e\a\o\g\4\7\g\7\b\l\f\r\4\n\g\t\l\i\v\l\9\h\h\y\v\1\h\n\t\p\j\8\l\d\a\2\v\g\t\l\o\7\m\g\b\n\t\n\k\g\k\m\d\v\2\4\v\n\f\y\q\7\r\o\8\g\5\9\h\e\i\f\x\6\h\a\b\0\x\m\a\j\0\k\d\k\2\c\o\s\h\5\1\o\0\7\t\r\e\3\r\6\x\j\d\z\w\9\f\e\j\f\o\p\c\b\k\w\y\e\c\n\o\f\t\d\q\0\i\5\3\b\y\v\n\j\s\2\o\g\x\b\j\g\8\x\1\8\t\t\n\f\3\c\7\h\4\0\1\6\6\1\0\b\i\p\u\t\p\4\w\q\j\d\p\d\8\v\d\i\0\4\6\4\5\k\y\k\q\d ]] 01:01:08.336 06:00:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:01:08.336 06:00:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 01:01:08.336 [2024-12-09 06:00:02.789263] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:01:08.336 [2024-12-09 06:00:02.789331] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60473 ] 01:01:08.596 [2024-12-09 06:00:02.938503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:08.596 [2024-12-09 06:00:02.977322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:08.596 [2024-12-09 06:00:03.018795] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:08.596  [2024-12-09T06:00:03.444Z] Copying: 512/512 [B] (average 250 kBps) 01:01:08.857 01:01:08.857 06:00:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ oaixcxxgusgpu242ya52905wahbvvsek8mxogqxca007sfb3yn2vri2l02qmne7200zklxdnmscmb4o1jvkmxkbz29lbar61mhweb0rg15a52868kzdi73mfrull5psmvfappxkjyotrxntytvfovq6ittmk0p3wqi0blwceiqitdef7tvosswv2o53b795y0kmcz08zxn84nfbk3d8wazchw6ouxzrme5cq8w4w1nm73y7avsytystqqcsbo9opnosq97eftj7l982y18t1ug7o2wd0i17s6ee1tgz39fjxyjxx2dl5rqg3bjs970hnybeaog47g7blfr4ngtlivl9hhyv1hntpj8lda2vgtlo7mgbntnkgkmdv24vnfyq7ro8g59heifx6hab0xmaj0kdk2cosh51o07tre3r6xjdzw9fejfopcbkwyecnoftdq0i53byvnjs2ogxbjg8x18ttnf3c7h4016610biputp4wqjdpd8vdi04645kykqd == \o\a\i\x\c\x\x\g\u\s\g\p\u\2\4\2\y\a\5\2\9\0\5\w\a\h\b\v\v\s\e\k\8\m\x\o\g\q\x\c\a\0\0\7\s\f\b\3\y\n\2\v\r\i\2\l\0\2\q\m\n\e\7\2\0\0\z\k\l\x\d\n\m\s\c\m\b\4\o\1\j\v\k\m\x\k\b\z\2\9\l\b\a\r\6\1\m\h\w\e\b\0\r\g\1\5\a\5\2\8\6\8\k\z\d\i\7\3\m\f\r\u\l\l\5\p\s\m\v\f\a\p\p\x\k\j\y\o\t\r\x\n\t\y\t\v\f\o\v\q\6\i\t\t\m\k\0\p\3\w\q\i\0\b\l\w\c\e\i\q\i\t\d\e\f\7\t\v\o\s\s\w\v\2\o\5\3\b\7\9\5\y\0\k\m\c\z\0\8\z\x\n\8\4\n\f\b\k\3\d\8\w\a\z\c\h\w\6\o\u\x\z\r\m\e\5\c\q\8\w\4\w\1\n\m\7\3\y\7\a\v\s\y\t\y\s\t\q\q\c\s\b\o\9\o\p\n\o\s\q\9\7\e\f\t\j\7\l\9\8\2\y\1\8\t\1\u\g\7\o\2\w\d\0\i\1\7\s\6\e\e\1\t\g\z\3\9\f\j\x\y\j\x\x\2\d\l\5\r\q\g\3\b\j\s\9\7\0\h\n\y\b\e\a\o\g\4\7\g\7\b\l\f\r\4\n\g\t\l\i\v\l\9\h\h\y\v\1\h\n\t\p\j\8\l\d\a\2\v\g\t\l\o\7\m\g\b\n\t\n\k\g\k\m\d\v\2\4\v\n\f\y\q\7\r\o\8\g\5\9\h\e\i\f\x\6\h\a\b\0\x\m\a\j\0\k\d\k\2\c\o\s\h\5\1\o\0\7\t\r\e\3\r\6\x\j\d\z\w\9\f\e\j\f\o\p\c\b\k\w\y\e\c\n\o\f\t\d\q\0\i\5\3\b\y\v\n\j\s\2\o\g\x\b\j\g\8\x\1\8\t\t\n\f\3\c\7\h\4\0\1\6\6\1\0\b\i\p\u\t\p\4\w\q\j\d\p\d\8\v\d\i\0\4\6\4\5\k\y\k\q\d ]] 01:01:08.857 01:01:08.857 real 0m4.018s 01:01:08.857 user 0m2.029s 01:01:08.857 sys 0m1.010s 01:01:08.857 06:00:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:08.857 ************************************ 01:01:08.857 END TEST dd_flags_misc_forced_aio 01:01:08.857 ************************************ 01:01:08.857 06:00:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:01:08.857 06:00:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 01:01:08.857 06:00:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 01:01:08.857 06:00:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 01:01:08.857 ************************************ 01:01:08.857 END TEST spdk_dd_posix 01:01:08.857 ************************************ 01:01:08.857 01:01:08.857 real 0m18.837s 01:01:08.857 user 0m8.411s 01:01:08.857 sys 0m6.137s 01:01:08.857 06:00:03 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1130 -- # xtrace_disable 01:01:08.857 06:00:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 01:01:08.857 06:00:03 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 01:01:08.857 06:00:03 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:08.857 06:00:03 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:08.857 06:00:03 spdk_dd -- common/autotest_common.sh@10 -- # set +x 01:01:08.857 ************************************ 01:01:08.857 START TEST spdk_dd_malloc 01:01:08.857 ************************************ 01:01:08.857 06:00:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 01:01:09.118 * Looking for test storage... 01:01:09.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lcov --version 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:01:09.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:09.118 --rc genhtml_branch_coverage=1 01:01:09.118 --rc genhtml_function_coverage=1 01:01:09.118 --rc genhtml_legend=1 01:01:09.118 --rc geninfo_all_blocks=1 01:01:09.118 --rc geninfo_unexecuted_blocks=1 01:01:09.118 01:01:09.118 ' 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:01:09.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:09.118 --rc genhtml_branch_coverage=1 01:01:09.118 --rc genhtml_function_coverage=1 01:01:09.118 --rc genhtml_legend=1 01:01:09.118 --rc geninfo_all_blocks=1 01:01:09.118 --rc geninfo_unexecuted_blocks=1 01:01:09.118 01:01:09.118 ' 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:01:09.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:09.118 --rc genhtml_branch_coverage=1 01:01:09.118 --rc genhtml_function_coverage=1 01:01:09.118 --rc genhtml_legend=1 01:01:09.118 --rc geninfo_all_blocks=1 01:01:09.118 --rc geninfo_unexecuted_blocks=1 01:01:09.118 01:01:09.118 ' 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:01:09.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:09.118 --rc genhtml_branch_coverage=1 01:01:09.118 --rc genhtml_function_coverage=1 01:01:09.118 --rc genhtml_legend=1 01:01:09.118 --rc geninfo_all_blocks=1 01:01:09.118 --rc geninfo_unexecuted_blocks=1 01:01:09.118 01:01:09.118 ' 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:01:09.118 06:00:03 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 01:01:09.118 ************************************ 01:01:09.118 START TEST dd_malloc_copy 01:01:09.118 ************************************ 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 01:01:09.118 06:00:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 01:01:09.118 [2024-12-09 06:00:03.698124] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:01:09.118 [2024-12-09 06:00:03.698187] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60557 ] 01:01:09.378 { 01:01:09.378 "subsystems": [ 01:01:09.378 { 01:01:09.378 "subsystem": "bdev", 01:01:09.378 "config": [ 01:01:09.378 { 01:01:09.378 "params": { 01:01:09.378 "block_size": 512, 01:01:09.378 "num_blocks": 1048576, 01:01:09.378 "name": "malloc0" 01:01:09.378 }, 01:01:09.378 "method": "bdev_malloc_create" 01:01:09.378 }, 01:01:09.378 { 01:01:09.378 "params": { 01:01:09.378 "block_size": 512, 01:01:09.378 "num_blocks": 1048576, 01:01:09.378 "name": "malloc1" 01:01:09.378 }, 01:01:09.378 "method": "bdev_malloc_create" 01:01:09.378 }, 01:01:09.378 { 01:01:09.378 "method": "bdev_wait_for_examine" 01:01:09.378 } 01:01:09.378 ] 01:01:09.378 } 01:01:09.378 ] 01:01:09.378 } 01:01:09.378 [2024-12-09 06:00:03.849622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:09.378 [2024-12-09 06:00:03.890470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:09.378 [2024-12-09 06:00:03.933063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:10.757  [2024-12-09T06:00:06.281Z] Copying: 273/512 [MB] (273 MBps) [2024-12-09T06:00:06.540Z] Copying: 512/512 [MB] (average 276 MBps) 01:01:11.953 01:01:11.953 06:00:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 01:01:11.953 06:00:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 01:01:11.953 06:00:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 01:01:11.953 06:00:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 01:01:12.211 [2024-12-09 06:00:06.576178] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:01:12.211 [2024-12-09 06:00:06.576253] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60599 ] 01:01:12.211 { 01:01:12.211 "subsystems": [ 01:01:12.211 { 01:01:12.211 "subsystem": "bdev", 01:01:12.211 "config": [ 01:01:12.211 { 01:01:12.211 "params": { 01:01:12.211 "block_size": 512, 01:01:12.211 "num_blocks": 1048576, 01:01:12.211 "name": "malloc0" 01:01:12.211 }, 01:01:12.211 "method": "bdev_malloc_create" 01:01:12.211 }, 01:01:12.211 { 01:01:12.211 "params": { 01:01:12.211 "block_size": 512, 01:01:12.211 "num_blocks": 1048576, 01:01:12.211 "name": "malloc1" 01:01:12.211 }, 01:01:12.211 "method": "bdev_malloc_create" 01:01:12.211 }, 01:01:12.211 { 01:01:12.211 "method": "bdev_wait_for_examine" 01:01:12.211 } 01:01:12.211 ] 01:01:12.211 } 01:01:12.211 ] 01:01:12.211 } 01:01:12.211 [2024-12-09 06:00:06.725695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:12.211 [2024-12-09 06:00:06.764930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:12.469 [2024-12-09 06:00:06.808119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:13.840  [2024-12-09T06:00:08.993Z] Copying: 277/512 [MB] (277 MBps) [2024-12-09T06:00:09.559Z] Copying: 512/512 [MB] (average 277 MBps) 01:01:14.972 01:01:14.972 01:01:14.972 real 0m5.752s 01:01:14.972 user 0m4.899s 01:01:14.972 sys 0m0.716s 01:01:14.972 06:00:09 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:14.972 06:00:09 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 01:01:14.972 ************************************ 01:01:14.972 END TEST dd_malloc_copy 01:01:14.972 ************************************ 01:01:14.972 01:01:14.972 real 0m6.066s 01:01:14.972 user 0m5.066s 01:01:14.972 sys 0m0.875s 01:01:14.972 06:00:09 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:14.972 06:00:09 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 01:01:14.972 ************************************ 01:01:14.972 END TEST spdk_dd_malloc 01:01:14.972 ************************************ 01:01:14.972 06:00:09 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 01:01:14.972 06:00:09 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:01:14.972 06:00:09 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:14.972 06:00:09 spdk_dd -- common/autotest_common.sh@10 -- # set +x 01:01:14.972 ************************************ 01:01:14.972 START TEST spdk_dd_bdev_to_bdev 01:01:14.972 ************************************ 01:01:14.972 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 01:01:15.231 * Looking for test storage... 
01:01:15.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lcov --version 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:01:15.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:15.231 --rc genhtml_branch_coverage=1 01:01:15.231 --rc genhtml_function_coverage=1 01:01:15.231 --rc genhtml_legend=1 01:01:15.231 --rc geninfo_all_blocks=1 01:01:15.231 --rc geninfo_unexecuted_blocks=1 01:01:15.231 01:01:15.231 ' 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:01:15.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:15.231 --rc genhtml_branch_coverage=1 01:01:15.231 --rc genhtml_function_coverage=1 01:01:15.231 --rc genhtml_legend=1 01:01:15.231 --rc geninfo_all_blocks=1 01:01:15.231 --rc geninfo_unexecuted_blocks=1 01:01:15.231 01:01:15.231 ' 01:01:15.231 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:01:15.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:15.232 --rc genhtml_branch_coverage=1 01:01:15.232 --rc genhtml_function_coverage=1 01:01:15.232 --rc genhtml_legend=1 01:01:15.232 --rc geninfo_all_blocks=1 01:01:15.232 --rc geninfo_unexecuted_blocks=1 01:01:15.232 01:01:15.232 ' 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:01:15.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:15.232 --rc genhtml_branch_coverage=1 01:01:15.232 --rc genhtml_function_coverage=1 01:01:15.232 --rc genhtml_legend=1 01:01:15.232 --rc geninfo_all_blocks=1 01:01:15.232 --rc geninfo_unexecuted_blocks=1 01:01:15.232 01:01:15.232 ' 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:01:15.232 06:00:09 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 01:01:15.232 ************************************ 01:01:15.232 START TEST dd_inflate_file 01:01:15.232 ************************************ 01:01:15.232 06:00:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 01:01:15.491 [2024-12-09 06:00:09.847672] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:01:15.491 [2024-12-09 06:00:09.847758] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60707 ] 01:01:15.491 [2024-12-09 06:00:09.996567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:15.491 [2024-12-09 06:00:10.045500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:15.749 [2024-12-09 06:00:10.086986] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:15.749  [2024-12-09T06:00:10.336Z] Copying: 64/64 [MB] (average 1254 MBps) 01:01:15.749 01:01:15.749 01:01:15.749 real 0m0.517s 01:01:15.749 user 0m0.283s 01:01:15.749 sys 0m0.296s 01:01:15.749 06:00:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:15.749 06:00:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 01:01:15.749 ************************************ 01:01:15.749 END TEST dd_inflate_file 01:01:15.749 ************************************ 01:01:16.009 06:00:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 01:01:16.009 06:00:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 01:01:16.009 06:00:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 01:01:16.009 06:00:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 01:01:16.009 06:00:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 01:01:16.009 06:00:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 01:01:16.009 06:00:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:16.009 06:00:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 01:01:16.009 06:00:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 01:01:16.009 ************************************ 01:01:16.009 START TEST dd_copy_to_out_bdev 01:01:16.009 ************************************ 01:01:16.009 06:00:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 01:01:16.009 { 01:01:16.009 "subsystems": [ 01:01:16.009 { 01:01:16.009 "subsystem": "bdev", 01:01:16.009 "config": [ 01:01:16.009 { 01:01:16.009 "params": { 01:01:16.009 "trtype": "pcie", 01:01:16.009 "traddr": "0000:00:10.0", 01:01:16.009 "name": "Nvme0" 01:01:16.009 }, 01:01:16.009 "method": "bdev_nvme_attach_controller" 01:01:16.009 }, 01:01:16.009 { 01:01:16.009 "params": { 01:01:16.009 "trtype": "pcie", 01:01:16.009 "traddr": "0000:00:11.0", 01:01:16.009 "name": "Nvme1" 01:01:16.009 }, 01:01:16.009 "method": "bdev_nvme_attach_controller" 01:01:16.009 }, 01:01:16.009 { 01:01:16.009 "method": "bdev_wait_for_examine" 01:01:16.009 } 01:01:16.009 ] 01:01:16.009 } 01:01:16.009 ] 01:01:16.009 } 01:01:16.009 [2024-12-09 06:00:10.454379] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:01:16.009 [2024-12-09 06:00:10.454449] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60741 ] 01:01:16.268 [2024-12-09 06:00:10.603528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:16.268 [2024-12-09 06:00:10.653350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:16.268 [2024-12-09 06:00:10.699681] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:17.648  [2024-12-09T06:00:12.494Z] Copying: 41/64 [MB] (41 MBps) [2024-12-09T06:00:12.753Z] Copying: 64/64 [MB] (average 40 MBps) 01:01:18.166 01:01:18.166 01:01:18.166 real 0m2.233s 01:01:18.166 user 0m2.011s 01:01:18.166 sys 0m1.893s 01:01:18.166 06:00:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:18.166 06:00:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 01:01:18.166 ************************************ 01:01:18.166 END TEST dd_copy_to_out_bdev 01:01:18.166 ************************************ 01:01:18.166 06:00:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 01:01:18.166 06:00:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 01:01:18.166 06:00:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:18.166 06:00:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:18.166 06:00:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 01:01:18.166 ************************************ 01:01:18.166 START TEST dd_offset_magic 01:01:18.166 ************************************ 01:01:18.166 06:00:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 01:01:18.166 06:00:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 01:01:18.166 06:00:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 01:01:18.166 06:00:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 01:01:18.166 06:00:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 01:01:18.166 06:00:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 01:01:18.166 06:00:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 01:01:18.166 06:00:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 01:01:18.166 06:00:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 01:01:18.426 [2024-12-09 06:00:12.771037] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:01:18.426 [2024-12-09 06:00:12.771282] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60786 ] 01:01:18.426 { 01:01:18.426 "subsystems": [ 01:01:18.426 { 01:01:18.426 "subsystem": "bdev", 01:01:18.426 "config": [ 01:01:18.426 { 01:01:18.426 "params": { 01:01:18.426 "trtype": "pcie", 01:01:18.426 "traddr": "0000:00:10.0", 01:01:18.426 "name": "Nvme0" 01:01:18.426 }, 01:01:18.426 "method": "bdev_nvme_attach_controller" 01:01:18.426 }, 01:01:18.426 { 01:01:18.426 "params": { 01:01:18.426 "trtype": "pcie", 01:01:18.426 "traddr": "0000:00:11.0", 01:01:18.426 "name": "Nvme1" 01:01:18.426 }, 01:01:18.426 "method": "bdev_nvme_attach_controller" 01:01:18.426 }, 01:01:18.426 { 01:01:18.426 "method": "bdev_wait_for_examine" 01:01:18.426 } 01:01:18.426 ] 01:01:18.426 } 01:01:18.426 ] 01:01:18.426 } 01:01:18.426 [2024-12-09 06:00:12.920499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:18.426 [2024-12-09 06:00:12.968918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:18.685 [2024-12-09 06:00:13.015995] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:18.944  [2024-12-09T06:00:13.531Z] Copying: 65/65 [MB] (average 613 MBps) 01:01:18.944 01:01:18.944 06:00:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 01:01:18.944 06:00:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 01:01:18.944 06:00:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 01:01:18.944 06:00:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 01:01:19.203 [2024-12-09 06:00:13.539859] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:01:19.203 [2024-12-09 06:00:13.539925] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60806 ] 01:01:19.203 { 01:01:19.203 "subsystems": [ 01:01:19.203 { 01:01:19.203 "subsystem": "bdev", 01:01:19.203 "config": [ 01:01:19.203 { 01:01:19.203 "params": { 01:01:19.203 "trtype": "pcie", 01:01:19.203 "traddr": "0000:00:10.0", 01:01:19.203 "name": "Nvme0" 01:01:19.203 }, 01:01:19.203 "method": "bdev_nvme_attach_controller" 01:01:19.203 }, 01:01:19.203 { 01:01:19.203 "params": { 01:01:19.203 "trtype": "pcie", 01:01:19.203 "traddr": "0000:00:11.0", 01:01:19.203 "name": "Nvme1" 01:01:19.203 }, 01:01:19.203 "method": "bdev_nvme_attach_controller" 01:01:19.203 }, 01:01:19.203 { 01:01:19.203 "method": "bdev_wait_for_examine" 01:01:19.203 } 01:01:19.203 ] 01:01:19.203 } 01:01:19.203 ] 01:01:19.203 } 01:01:19.203 [2024-12-09 06:00:13.681578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:19.203 [2024-12-09 06:00:13.723954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:19.203 [2024-12-09 06:00:13.766757] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:19.463  [2024-12-09T06:00:14.310Z] Copying: 1024/1024 [kB] (average 500 MBps) 01:01:19.723 01:01:19.723 06:00:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 01:01:19.723 06:00:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 01:01:19.723 06:00:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 01:01:19.723 06:00:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 01:01:19.723 06:00:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 01:01:19.723 06:00:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 01:01:19.723 06:00:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 01:01:19.723 [2024-12-09 06:00:14.149161] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:01:19.723 [2024-12-09 06:00:14.149712] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60822 ] 01:01:19.723 { 01:01:19.723 "subsystems": [ 01:01:19.723 { 01:01:19.723 "subsystem": "bdev", 01:01:19.723 "config": [ 01:01:19.723 { 01:01:19.723 "params": { 01:01:19.723 "trtype": "pcie", 01:01:19.723 "traddr": "0000:00:10.0", 01:01:19.723 "name": "Nvme0" 01:01:19.723 }, 01:01:19.723 "method": "bdev_nvme_attach_controller" 01:01:19.723 }, 01:01:19.723 { 01:01:19.723 "params": { 01:01:19.723 "trtype": "pcie", 01:01:19.723 "traddr": "0000:00:11.0", 01:01:19.723 "name": "Nvme1" 01:01:19.723 }, 01:01:19.723 "method": "bdev_nvme_attach_controller" 01:01:19.723 }, 01:01:19.723 { 01:01:19.723 "method": "bdev_wait_for_examine" 01:01:19.723 } 01:01:19.723 ] 01:01:19.723 } 01:01:19.723 ] 01:01:19.723 } 01:01:19.723 [2024-12-09 06:00:14.303512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:19.986 [2024-12-09 06:00:14.344735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:19.986 [2024-12-09 06:00:14.389447] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:20.266  [2024-12-09T06:00:14.853Z] Copying: 65/65 [MB] (average 691 MBps) 01:01:20.266 01:01:20.266 06:00:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 01:01:20.534 06:00:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 01:01:20.534 06:00:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 01:01:20.534 06:00:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 01:01:20.534 [2024-12-09 06:00:14.896736] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:01:20.534 [2024-12-09 06:00:14.896799] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60837 ] 01:01:20.534 { 01:01:20.534 "subsystems": [ 01:01:20.534 { 01:01:20.534 "subsystem": "bdev", 01:01:20.534 "config": [ 01:01:20.534 { 01:01:20.534 "params": { 01:01:20.534 "trtype": "pcie", 01:01:20.534 "traddr": "0000:00:10.0", 01:01:20.534 "name": "Nvme0" 01:01:20.534 }, 01:01:20.534 "method": "bdev_nvme_attach_controller" 01:01:20.534 }, 01:01:20.534 { 01:01:20.534 "params": { 01:01:20.534 "trtype": "pcie", 01:01:20.534 "traddr": "0000:00:11.0", 01:01:20.534 "name": "Nvme1" 01:01:20.534 }, 01:01:20.534 "method": "bdev_nvme_attach_controller" 01:01:20.534 }, 01:01:20.534 { 01:01:20.534 "method": "bdev_wait_for_examine" 01:01:20.534 } 01:01:20.534 ] 01:01:20.534 } 01:01:20.534 ] 01:01:20.534 } 01:01:20.534 [2024-12-09 06:00:15.045364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:20.534 [2024-12-09 06:00:15.090051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:20.793 [2024-12-09 06:00:15.134685] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:20.793  [2024-12-09T06:00:15.639Z] Copying: 1024/1024 [kB] (average 500 MBps) 01:01:21.052 01:01:21.052 06:00:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 01:01:21.052 06:00:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 01:01:21.052 01:01:21.052 real 0m2.748s 01:01:21.052 user 0m1.958s 01:01:21.052 sys 0m0.835s 01:01:21.052 ************************************ 01:01:21.052 END TEST dd_offset_magic 01:01:21.052 06:00:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:21.052 06:00:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 01:01:21.052 ************************************ 01:01:21.052 06:00:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 01:01:21.052 06:00:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 01:01:21.052 06:00:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 01:01:21.052 06:00:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 01:01:21.052 06:00:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 01:01:21.052 06:00:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 01:01:21.052 06:00:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 01:01:21.052 06:00:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 01:01:21.052 06:00:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 01:01:21.052 06:00:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 01:01:21.052 06:00:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 01:01:21.052 [2024-12-09 06:00:15.585372] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:01:21.052 [2024-12-09 06:00:15.585626] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60871 ] 01:01:21.052 { 01:01:21.052 "subsystems": [ 01:01:21.052 { 01:01:21.052 "subsystem": "bdev", 01:01:21.052 "config": [ 01:01:21.052 { 01:01:21.052 "params": { 01:01:21.052 "trtype": "pcie", 01:01:21.052 "traddr": "0000:00:10.0", 01:01:21.052 "name": "Nvme0" 01:01:21.052 }, 01:01:21.052 "method": "bdev_nvme_attach_controller" 01:01:21.052 }, 01:01:21.052 { 01:01:21.052 "params": { 01:01:21.052 "trtype": "pcie", 01:01:21.052 "traddr": "0000:00:11.0", 01:01:21.052 "name": "Nvme1" 01:01:21.052 }, 01:01:21.052 "method": "bdev_nvme_attach_controller" 01:01:21.052 }, 01:01:21.052 { 01:01:21.052 "method": "bdev_wait_for_examine" 01:01:21.052 } 01:01:21.052 ] 01:01:21.052 } 01:01:21.052 ] 01:01:21.052 } 01:01:21.311 [2024-12-09 06:00:15.737905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:21.311 [2024-12-09 06:00:15.784429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:21.311 [2024-12-09 06:00:15.832478] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:21.569  [2024-12-09T06:00:16.416Z] Copying: 5120/5120 [kB] (average 1000 MBps) 01:01:21.829 01:01:21.829 06:00:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 01:01:21.829 06:00:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 01:01:21.829 06:00:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 01:01:21.829 06:00:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 01:01:21.829 06:00:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 01:01:21.829 06:00:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 01:01:21.829 06:00:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 01:01:21.829 06:00:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 01:01:21.829 06:00:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 01:01:21.829 06:00:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 01:01:21.829 [2024-12-09 06:00:16.219710] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:01:21.829 [2024-12-09 06:00:16.219884] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60889 ] 01:01:21.829 { 01:01:21.829 "subsystems": [ 01:01:21.829 { 01:01:21.829 "subsystem": "bdev", 01:01:21.829 "config": [ 01:01:21.829 { 01:01:21.829 "params": { 01:01:21.829 "trtype": "pcie", 01:01:21.829 "traddr": "0000:00:10.0", 01:01:21.829 "name": "Nvme0" 01:01:21.829 }, 01:01:21.829 "method": "bdev_nvme_attach_controller" 01:01:21.829 }, 01:01:21.829 { 01:01:21.829 "params": { 01:01:21.829 "trtype": "pcie", 01:01:21.829 "traddr": "0000:00:11.0", 01:01:21.829 "name": "Nvme1" 01:01:21.829 }, 01:01:21.829 "method": "bdev_nvme_attach_controller" 01:01:21.829 }, 01:01:21.829 { 01:01:21.829 "method": "bdev_wait_for_examine" 01:01:21.829 } 01:01:21.829 ] 01:01:21.829 } 01:01:21.829 ] 01:01:21.829 } 01:01:21.829 [2024-12-09 06:00:16.370183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:22.088 [2024-12-09 06:00:16.414496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:22.088 [2024-12-09 06:00:16.460728] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:22.088  [2024-12-09T06:00:16.959Z] Copying: 5120/5120 [kB] (average 555 MBps) 01:01:22.372 01:01:22.372 06:00:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 01:01:22.372 ************************************ 01:01:22.372 END TEST spdk_dd_bdev_to_bdev 01:01:22.372 ************************************ 01:01:22.372 01:01:22.372 real 0m7.288s 01:01:22.372 user 0m5.317s 01:01:22.372 sys 0m3.847s 01:01:22.372 06:00:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:22.372 06:00:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 01:01:22.372 06:00:16 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 01:01:22.372 06:00:16 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 01:01:22.372 06:00:16 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:22.372 06:00:16 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:22.372 06:00:16 spdk_dd -- common/autotest_common.sh@10 -- # set +x 01:01:22.372 ************************************ 01:01:22.372 START TEST spdk_dd_uring 01:01:22.372 ************************************ 01:01:22.372 06:00:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 01:01:22.632 * Looking for test storage... 
01:01:22.632 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lcov --version 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 01:01:22.632 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:01:22.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:22.633 --rc genhtml_branch_coverage=1 01:01:22.633 --rc genhtml_function_coverage=1 01:01:22.633 --rc genhtml_legend=1 01:01:22.633 --rc geninfo_all_blocks=1 01:01:22.633 --rc geninfo_unexecuted_blocks=1 01:01:22.633 01:01:22.633 ' 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:01:22.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:22.633 --rc genhtml_branch_coverage=1 01:01:22.633 --rc genhtml_function_coverage=1 01:01:22.633 --rc genhtml_legend=1 01:01:22.633 --rc geninfo_all_blocks=1 01:01:22.633 --rc geninfo_unexecuted_blocks=1 01:01:22.633 01:01:22.633 ' 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:01:22.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:22.633 --rc genhtml_branch_coverage=1 01:01:22.633 --rc genhtml_function_coverage=1 01:01:22.633 --rc genhtml_legend=1 01:01:22.633 --rc geninfo_all_blocks=1 01:01:22.633 --rc geninfo_unexecuted_blocks=1 01:01:22.633 01:01:22.633 ' 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:01:22.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:22.633 --rc genhtml_branch_coverage=1 01:01:22.633 --rc genhtml_function_coverage=1 01:01:22.633 --rc genhtml_legend=1 01:01:22.633 --rc geninfo_all_blocks=1 01:01:22.633 --rc geninfo_unexecuted_blocks=1 01:01:22.633 01:01:22.633 ' 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 01:01:22.633 ************************************ 01:01:22.633 START TEST dd_uring_copy 01:01:22.633 ************************************ 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 01:01:22.633 
06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=sbg3syouf3y2dhhr6dibs79xjj6s1mk22fw6di3ek8qno5paapfqs4qjjg396f2supthdte8tyfgarwpbviry5bxfs7g6va9148lx77mzq0gsgpke52o4bkxuuogitwqy0h2rl2nn9so8ybu16se1tvt5t79q6hqdvbjpc8x5u01mcfsp9pvqcdik08jybpuwmy6poyvnaulr4yeu2dxoicq6f4znihxznlxx5qfljjxqt7uocwezly0n23ucijw8974r1uh8fpcvdnyujaw090b5kruxn7n67qayvmizdwm9udra07y93c20l6no0a7u2fb92ln2rqib7v25jvnccifbqq0fv8jnw0ges4xd5xzoajurxtuq94729ey7he9024h144f6rn0yx22hhorgldi8mqsgab8hh1malkjss4ya3dd8umh6in5d9ly31vgy50kg9h6izdgknce39nbt5ka6rm54268yynxaez967x966ktzuk8wvjmk422y3d2tfctockh16aerrjpthjhay6w7zvhaj7sel43cxnvyzynjkew9inwiv4ubb7ongrgeaq11636m0eqzbfkhkatrttig1aqa8jldr5b7ok8yd45ina4ozutbao1wsgqkzocu9y1u42kgx3wy43zgj2oem15bfc9jmkp38rz9ffnv18kejen6v36gsnb0mlbw3ockf7wm0sv3m2mw0ozkbm8n65seglb299bzrw0ymo6zzn928yyqyyuzyi66pdaqidg58usiv1tzvz1amfdoz697zpof16deq3owznw4h1x8s2obs0cmv2i5b7i1zltcnzkn7odzfotcnu243vlivjmqurj8nwck3gf0z57ccbtok0os1oit1mqkrtj6lt1o8dbx949bfzoia7k4dsvl86b8tvqzj6ksja2d65lgw238098y9knoiha25f4jxsotfoobh8e4eey75up8q6rjrtidrk2on63v6j7n09i3imwk9wnodkwb7ca8pz4j5zfwp5v 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
sbg3syouf3y2dhhr6dibs79xjj6s1mk22fw6di3ek8qno5paapfqs4qjjg396f2supthdte8tyfgarwpbviry5bxfs7g6va9148lx77mzq0gsgpke52o4bkxuuogitwqy0h2rl2nn9so8ybu16se1tvt5t79q6hqdvbjpc8x5u01mcfsp9pvqcdik08jybpuwmy6poyvnaulr4yeu2dxoicq6f4znihxznlxx5qfljjxqt7uocwezly0n23ucijw8974r1uh8fpcvdnyujaw090b5kruxn7n67qayvmizdwm9udra07y93c20l6no0a7u2fb92ln2rqib7v25jvnccifbqq0fv8jnw0ges4xd5xzoajurxtuq94729ey7he9024h144f6rn0yx22hhorgldi8mqsgab8hh1malkjss4ya3dd8umh6in5d9ly31vgy50kg9h6izdgknce39nbt5ka6rm54268yynxaez967x966ktzuk8wvjmk422y3d2tfctockh16aerrjpthjhay6w7zvhaj7sel43cxnvyzynjkew9inwiv4ubb7ongrgeaq11636m0eqzbfkhkatrttig1aqa8jldr5b7ok8yd45ina4ozutbao1wsgqkzocu9y1u42kgx3wy43zgj2oem15bfc9jmkp38rz9ffnv18kejen6v36gsnb0mlbw3ockf7wm0sv3m2mw0ozkbm8n65seglb299bzrw0ymo6zzn928yyqyyuzyi66pdaqidg58usiv1tzvz1amfdoz697zpof16deq3owznw4h1x8s2obs0cmv2i5b7i1zltcnzkn7odzfotcnu243vlivjmqurj8nwck3gf0z57ccbtok0os1oit1mqkrtj6lt1o8dbx949bfzoia7k4dsvl86b8tvqzj6ksja2d65lgw238098y9knoiha25f4jxsotfoobh8e4eey75up8q6rjrtidrk2on63v6j7n09i3imwk9wnodkwb7ca8pz4j5zfwp5v 01:01:22.633 06:00:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 01:01:22.892 [2024-12-09 06:00:17.244913] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:01:22.892 [2024-12-09 06:00:17.244973] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60968 ] 01:01:22.892 [2024-12-09 06:00:17.400447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:22.892 [2024-12-09 06:00:17.447387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:23.151 [2024-12-09 06:00:17.492907] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:23.717  [2024-12-09T06:00:18.563Z] Copying: 511/511 [MB] (average 1350 MBps) 01:01:23.976 01:01:23.976 06:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 01:01:23.976 06:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 01:01:23.976 06:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 01:01:23.976 06:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 01:01:23.976 [2024-12-09 06:00:18.427837] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:01:23.976 [2024-12-09 06:00:18.427900] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60985 ] 01:01:23.976 { 01:01:23.976 "subsystems": [ 01:01:23.976 { 01:01:23.976 "subsystem": "bdev", 01:01:23.976 "config": [ 01:01:23.976 { 01:01:23.976 "params": { 01:01:23.976 "block_size": 512, 01:01:23.976 "num_blocks": 1048576, 01:01:23.976 "name": "malloc0" 01:01:23.976 }, 01:01:23.976 "method": "bdev_malloc_create" 01:01:23.976 }, 01:01:23.976 { 01:01:23.976 "params": { 01:01:23.976 "filename": "/dev/zram1", 01:01:23.976 "name": "uring0" 01:01:23.976 }, 01:01:23.976 "method": "bdev_uring_create" 01:01:23.976 }, 01:01:23.976 { 01:01:23.976 "method": "bdev_wait_for_examine" 01:01:23.976 } 01:01:23.976 ] 01:01:23.976 } 01:01:23.976 ] 01:01:23.976 } 01:01:24.235 [2024-12-09 06:00:18.576185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:24.235 [2024-12-09 06:00:18.618510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:24.235 [2024-12-09 06:00:18.663296] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:25.611  [2024-12-09T06:00:21.134Z] Copying: 264/512 [MB] (264 MBps) [2024-12-09T06:00:21.134Z] Copying: 511/512 [MB] (247 MBps) [2024-12-09T06:00:21.134Z] Copying: 512/512 [MB] (average 256 MBps) 01:01:26.547 01:01:26.805 06:00:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 01:01:26.805 06:00:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 01:01:26.805 06:00:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 01:01:26.805 06:00:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 01:01:26.805 [2024-12-09 06:00:21.185788] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:01:26.806 [2024-12-09 06:00:21.186025] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61024 ] 01:01:26.806 { 01:01:26.806 "subsystems": [ 01:01:26.806 { 01:01:26.806 "subsystem": "bdev", 01:01:26.806 "config": [ 01:01:26.806 { 01:01:26.806 "params": { 01:01:26.806 "block_size": 512, 01:01:26.806 "num_blocks": 1048576, 01:01:26.806 "name": "malloc0" 01:01:26.806 }, 01:01:26.806 "method": "bdev_malloc_create" 01:01:26.806 }, 01:01:26.806 { 01:01:26.806 "params": { 01:01:26.806 "filename": "/dev/zram1", 01:01:26.806 "name": "uring0" 01:01:26.806 }, 01:01:26.806 "method": "bdev_uring_create" 01:01:26.806 }, 01:01:26.806 { 01:01:26.806 "method": "bdev_wait_for_examine" 01:01:26.806 } 01:01:26.806 ] 01:01:26.806 } 01:01:26.806 ] 01:01:26.806 } 01:01:26.806 [2024-12-09 06:00:21.336516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:26.806 [2024-12-09 06:00:21.382246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:27.065 [2024-12-09 06:00:21.427392] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:28.002  [2024-12-09T06:00:23.966Z] Copying: 183/512 [MB] (183 MBps) [2024-12-09T06:00:24.902Z] Copying: 365/512 [MB] (182 MBps) [2024-12-09T06:00:24.902Z] Copying: 512/512 [MB] (average 172 MBps) 01:01:30.315 01:01:30.315 06:00:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 01:01:30.315 06:00:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ sbg3syouf3y2dhhr6dibs79xjj6s1mk22fw6di3ek8qno5paapfqs4qjjg396f2supthdte8tyfgarwpbviry5bxfs7g6va9148lx77mzq0gsgpke52o4bkxuuogitwqy0h2rl2nn9so8ybu16se1tvt5t79q6hqdvbjpc8x5u01mcfsp9pvqcdik08jybpuwmy6poyvnaulr4yeu2dxoicq6f4znihxznlxx5qfljjxqt7uocwezly0n23ucijw8974r1uh8fpcvdnyujaw090b5kruxn7n67qayvmizdwm9udra07y93c20l6no0a7u2fb92ln2rqib7v25jvnccifbqq0fv8jnw0ges4xd5xzoajurxtuq94729ey7he9024h144f6rn0yx22hhorgldi8mqsgab8hh1malkjss4ya3dd8umh6in5d9ly31vgy50kg9h6izdgknce39nbt5ka6rm54268yynxaez967x966ktzuk8wvjmk422y3d2tfctockh16aerrjpthjhay6w7zvhaj7sel43cxnvyzynjkew9inwiv4ubb7ongrgeaq11636m0eqzbfkhkatrttig1aqa8jldr5b7ok8yd45ina4ozutbao1wsgqkzocu9y1u42kgx3wy43zgj2oem15bfc9jmkp38rz9ffnv18kejen6v36gsnb0mlbw3ockf7wm0sv3m2mw0ozkbm8n65seglb299bzrw0ymo6zzn928yyqyyuzyi66pdaqidg58usiv1tzvz1amfdoz697zpof16deq3owznw4h1x8s2obs0cmv2i5b7i1zltcnzkn7odzfotcnu243vlivjmqurj8nwck3gf0z57ccbtok0os1oit1mqkrtj6lt1o8dbx949bfzoia7k4dsvl86b8tvqzj6ksja2d65lgw238098y9knoiha25f4jxsotfoobh8e4eey75up8q6rjrtidrk2on63v6j7n09i3imwk9wnodkwb7ca8pz4j5zfwp5v == 
\s\b\g\3\s\y\o\u\f\3\y\2\d\h\h\r\6\d\i\b\s\7\9\x\j\j\6\s\1\m\k\2\2\f\w\6\d\i\3\e\k\8\q\n\o\5\p\a\a\p\f\q\s\4\q\j\j\g\3\9\6\f\2\s\u\p\t\h\d\t\e\8\t\y\f\g\a\r\w\p\b\v\i\r\y\5\b\x\f\s\7\g\6\v\a\9\1\4\8\l\x\7\7\m\z\q\0\g\s\g\p\k\e\5\2\o\4\b\k\x\u\u\o\g\i\t\w\q\y\0\h\2\r\l\2\n\n\9\s\o\8\y\b\u\1\6\s\e\1\t\v\t\5\t\7\9\q\6\h\q\d\v\b\j\p\c\8\x\5\u\0\1\m\c\f\s\p\9\p\v\q\c\d\i\k\0\8\j\y\b\p\u\w\m\y\6\p\o\y\v\n\a\u\l\r\4\y\e\u\2\d\x\o\i\c\q\6\f\4\z\n\i\h\x\z\n\l\x\x\5\q\f\l\j\j\x\q\t\7\u\o\c\w\e\z\l\y\0\n\2\3\u\c\i\j\w\8\9\7\4\r\1\u\h\8\f\p\c\v\d\n\y\u\j\a\w\0\9\0\b\5\k\r\u\x\n\7\n\6\7\q\a\y\v\m\i\z\d\w\m\9\u\d\r\a\0\7\y\9\3\c\2\0\l\6\n\o\0\a\7\u\2\f\b\9\2\l\n\2\r\q\i\b\7\v\2\5\j\v\n\c\c\i\f\b\q\q\0\f\v\8\j\n\w\0\g\e\s\4\x\d\5\x\z\o\a\j\u\r\x\t\u\q\9\4\7\2\9\e\y\7\h\e\9\0\2\4\h\1\4\4\f\6\r\n\0\y\x\2\2\h\h\o\r\g\l\d\i\8\m\q\s\g\a\b\8\h\h\1\m\a\l\k\j\s\s\4\y\a\3\d\d\8\u\m\h\6\i\n\5\d\9\l\y\3\1\v\g\y\5\0\k\g\9\h\6\i\z\d\g\k\n\c\e\3\9\n\b\t\5\k\a\6\r\m\5\4\2\6\8\y\y\n\x\a\e\z\9\6\7\x\9\6\6\k\t\z\u\k\8\w\v\j\m\k\4\2\2\y\3\d\2\t\f\c\t\o\c\k\h\1\6\a\e\r\r\j\p\t\h\j\h\a\y\6\w\7\z\v\h\a\j\7\s\e\l\4\3\c\x\n\v\y\z\y\n\j\k\e\w\9\i\n\w\i\v\4\u\b\b\7\o\n\g\r\g\e\a\q\1\1\6\3\6\m\0\e\q\z\b\f\k\h\k\a\t\r\t\t\i\g\1\a\q\a\8\j\l\d\r\5\b\7\o\k\8\y\d\4\5\i\n\a\4\o\z\u\t\b\a\o\1\w\s\g\q\k\z\o\c\u\9\y\1\u\4\2\k\g\x\3\w\y\4\3\z\g\j\2\o\e\m\1\5\b\f\c\9\j\m\k\p\3\8\r\z\9\f\f\n\v\1\8\k\e\j\e\n\6\v\3\6\g\s\n\b\0\m\l\b\w\3\o\c\k\f\7\w\m\0\s\v\3\m\2\m\w\0\o\z\k\b\m\8\n\6\5\s\e\g\l\b\2\9\9\b\z\r\w\0\y\m\o\6\z\z\n\9\2\8\y\y\q\y\y\u\z\y\i\6\6\p\d\a\q\i\d\g\5\8\u\s\i\v\1\t\z\v\z\1\a\m\f\d\o\z\6\9\7\z\p\o\f\1\6\d\e\q\3\o\w\z\n\w\4\h\1\x\8\s\2\o\b\s\0\c\m\v\2\i\5\b\7\i\1\z\l\t\c\n\z\k\n\7\o\d\z\f\o\t\c\n\u\2\4\3\v\l\i\v\j\m\q\u\r\j\8\n\w\c\k\3\g\f\0\z\5\7\c\c\b\t\o\k\0\o\s\1\o\i\t\1\m\q\k\r\t\j\6\l\t\1\o\8\d\b\x\9\4\9\b\f\z\o\i\a\7\k\4\d\s\v\l\8\6\b\8\t\v\q\z\j\6\k\s\j\a\2\d\6\5\l\g\w\2\3\8\0\9\8\y\9\k\n\o\i\h\a\2\5\f\4\j\x\s\o\t\f\o\o\b\h\8\e\4\e\e\y\7\5\u\p\8\q\6\r\j\r\t\i\d\r\k\2\o\n\6\3\v\6\j\7\n\0\9\i\3\i\m\w\k\9\w\n\o\d\k\w\b\7\c\a\8\p\z\4\j\5\z\f\w\p\5\v ]] 01:01:30.315 06:00:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 01:01:30.316 06:00:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ sbg3syouf3y2dhhr6dibs79xjj6s1mk22fw6di3ek8qno5paapfqs4qjjg396f2supthdte8tyfgarwpbviry5bxfs7g6va9148lx77mzq0gsgpke52o4bkxuuogitwqy0h2rl2nn9so8ybu16se1tvt5t79q6hqdvbjpc8x5u01mcfsp9pvqcdik08jybpuwmy6poyvnaulr4yeu2dxoicq6f4znihxznlxx5qfljjxqt7uocwezly0n23ucijw8974r1uh8fpcvdnyujaw090b5kruxn7n67qayvmizdwm9udra07y93c20l6no0a7u2fb92ln2rqib7v25jvnccifbqq0fv8jnw0ges4xd5xzoajurxtuq94729ey7he9024h144f6rn0yx22hhorgldi8mqsgab8hh1malkjss4ya3dd8umh6in5d9ly31vgy50kg9h6izdgknce39nbt5ka6rm54268yynxaez967x966ktzuk8wvjmk422y3d2tfctockh16aerrjpthjhay6w7zvhaj7sel43cxnvyzynjkew9inwiv4ubb7ongrgeaq11636m0eqzbfkhkatrttig1aqa8jldr5b7ok8yd45ina4ozutbao1wsgqkzocu9y1u42kgx3wy43zgj2oem15bfc9jmkp38rz9ffnv18kejen6v36gsnb0mlbw3ockf7wm0sv3m2mw0ozkbm8n65seglb299bzrw0ymo6zzn928yyqyyuzyi66pdaqidg58usiv1tzvz1amfdoz697zpof16deq3owznw4h1x8s2obs0cmv2i5b7i1zltcnzkn7odzfotcnu243vlivjmqurj8nwck3gf0z57ccbtok0os1oit1mqkrtj6lt1o8dbx949bfzoia7k4dsvl86b8tvqzj6ksja2d65lgw238098y9knoiha25f4jxsotfoobh8e4eey75up8q6rjrtidrk2on63v6j7n09i3imwk9wnodkwb7ca8pz4j5zfwp5v == 
\s\b\g\3\s\y\o\u\f\3\y\2\d\h\h\r\6\d\i\b\s\7\9\x\j\j\6\s\1\m\k\2\2\f\w\6\d\i\3\e\k\8\q\n\o\5\p\a\a\p\f\q\s\4\q\j\j\g\3\9\6\f\2\s\u\p\t\h\d\t\e\8\t\y\f\g\a\r\w\p\b\v\i\r\y\5\b\x\f\s\7\g\6\v\a\9\1\4\8\l\x\7\7\m\z\q\0\g\s\g\p\k\e\5\2\o\4\b\k\x\u\u\o\g\i\t\w\q\y\0\h\2\r\l\2\n\n\9\s\o\8\y\b\u\1\6\s\e\1\t\v\t\5\t\7\9\q\6\h\q\d\v\b\j\p\c\8\x\5\u\0\1\m\c\f\s\p\9\p\v\q\c\d\i\k\0\8\j\y\b\p\u\w\m\y\6\p\o\y\v\n\a\u\l\r\4\y\e\u\2\d\x\o\i\c\q\6\f\4\z\n\i\h\x\z\n\l\x\x\5\q\f\l\j\j\x\q\t\7\u\o\c\w\e\z\l\y\0\n\2\3\u\c\i\j\w\8\9\7\4\r\1\u\h\8\f\p\c\v\d\n\y\u\j\a\w\0\9\0\b\5\k\r\u\x\n\7\n\6\7\q\a\y\v\m\i\z\d\w\m\9\u\d\r\a\0\7\y\9\3\c\2\0\l\6\n\o\0\a\7\u\2\f\b\9\2\l\n\2\r\q\i\b\7\v\2\5\j\v\n\c\c\i\f\b\q\q\0\f\v\8\j\n\w\0\g\e\s\4\x\d\5\x\z\o\a\j\u\r\x\t\u\q\9\4\7\2\9\e\y\7\h\e\9\0\2\4\h\1\4\4\f\6\r\n\0\y\x\2\2\h\h\o\r\g\l\d\i\8\m\q\s\g\a\b\8\h\h\1\m\a\l\k\j\s\s\4\y\a\3\d\d\8\u\m\h\6\i\n\5\d\9\l\y\3\1\v\g\y\5\0\k\g\9\h\6\i\z\d\g\k\n\c\e\3\9\n\b\t\5\k\a\6\r\m\5\4\2\6\8\y\y\n\x\a\e\z\9\6\7\x\9\6\6\k\t\z\u\k\8\w\v\j\m\k\4\2\2\y\3\d\2\t\f\c\t\o\c\k\h\1\6\a\e\r\r\j\p\t\h\j\h\a\y\6\w\7\z\v\h\a\j\7\s\e\l\4\3\c\x\n\v\y\z\y\n\j\k\e\w\9\i\n\w\i\v\4\u\b\b\7\o\n\g\r\g\e\a\q\1\1\6\3\6\m\0\e\q\z\b\f\k\h\k\a\t\r\t\t\i\g\1\a\q\a\8\j\l\d\r\5\b\7\o\k\8\y\d\4\5\i\n\a\4\o\z\u\t\b\a\o\1\w\s\g\q\k\z\o\c\u\9\y\1\u\4\2\k\g\x\3\w\y\4\3\z\g\j\2\o\e\m\1\5\b\f\c\9\j\m\k\p\3\8\r\z\9\f\f\n\v\1\8\k\e\j\e\n\6\v\3\6\g\s\n\b\0\m\l\b\w\3\o\c\k\f\7\w\m\0\s\v\3\m\2\m\w\0\o\z\k\b\m\8\n\6\5\s\e\g\l\b\2\9\9\b\z\r\w\0\y\m\o\6\z\z\n\9\2\8\y\y\q\y\y\u\z\y\i\6\6\p\d\a\q\i\d\g\5\8\u\s\i\v\1\t\z\v\z\1\a\m\f\d\o\z\6\9\7\z\p\o\f\1\6\d\e\q\3\o\w\z\n\w\4\h\1\x\8\s\2\o\b\s\0\c\m\v\2\i\5\b\7\i\1\z\l\t\c\n\z\k\n\7\o\d\z\f\o\t\c\n\u\2\4\3\v\l\i\v\j\m\q\u\r\j\8\n\w\c\k\3\g\f\0\z\5\7\c\c\b\t\o\k\0\o\s\1\o\i\t\1\m\q\k\r\t\j\6\l\t\1\o\8\d\b\x\9\4\9\b\f\z\o\i\a\7\k\4\d\s\v\l\8\6\b\8\t\v\q\z\j\6\k\s\j\a\2\d\6\5\l\g\w\2\3\8\0\9\8\y\9\k\n\o\i\h\a\2\5\f\4\j\x\s\o\t\f\o\o\b\h\8\e\4\e\e\y\7\5\u\p\8\q\6\r\j\r\t\i\d\r\k\2\o\n\6\3\v\6\j\7\n\0\9\i\3\i\m\w\k\9\w\n\o\d\k\w\b\7\c\a\8\p\z\4\j\5\z\f\w\p\5\v ]] 01:01:30.316 06:00:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 01:01:30.884 06:00:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 01:01:30.884 06:00:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 01:01:30.884 06:00:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 01:01:30.884 06:00:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 01:01:30.884 [2024-12-09 06:00:25.301742] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:01:30.884 [2024-12-09 06:00:25.301948] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61100 ] 01:01:30.884 { 01:01:30.884 "subsystems": [ 01:01:30.884 { 01:01:30.884 "subsystem": "bdev", 01:01:30.884 "config": [ 01:01:30.884 { 01:01:30.884 "params": { 01:01:30.884 "block_size": 512, 01:01:30.884 "num_blocks": 1048576, 01:01:30.884 "name": "malloc0" 01:01:30.884 }, 01:01:30.884 "method": "bdev_malloc_create" 01:01:30.884 }, 01:01:30.884 { 01:01:30.884 "params": { 01:01:30.884 "filename": "/dev/zram1", 01:01:30.884 "name": "uring0" 01:01:30.884 }, 01:01:30.884 "method": "bdev_uring_create" 01:01:30.884 }, 01:01:30.884 { 01:01:30.884 "method": "bdev_wait_for_examine" 01:01:30.884 } 01:01:30.884 ] 01:01:30.884 } 01:01:30.884 ] 01:01:30.884 } 01:01:30.884 [2024-12-09 06:00:25.451197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:31.143 [2024-12-09 06:00:25.495648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:31.143 [2024-12-09 06:00:25.538524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:32.522  [2024-12-09T06:00:28.047Z] Copying: 206/512 [MB] (206 MBps) [2024-12-09T06:00:28.306Z] Copying: 412/512 [MB] (205 MBps) [2024-12-09T06:00:28.564Z] Copying: 512/512 [MB] (average 206 MBps) 01:01:33.977 01:01:33.977 06:00:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 01:01:33.977 06:00:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 01:01:33.977 06:00:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 01:01:33.977 06:00:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 01:01:33.977 06:00:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 01:01:33.977 06:00:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 01:01:33.977 06:00:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 01:01:33.977 06:00:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 01:01:33.977 [2024-12-09 06:00:28.547233] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:01:33.977 [2024-12-09 06:00:28.547457] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61145 ] 01:01:33.977 { 01:01:33.977 "subsystems": [ 01:01:33.978 { 01:01:33.978 "subsystem": "bdev", 01:01:33.978 "config": [ 01:01:33.978 { 01:01:33.978 "params": { 01:01:33.978 "block_size": 512, 01:01:33.978 "num_blocks": 1048576, 01:01:33.978 "name": "malloc0" 01:01:33.978 }, 01:01:33.978 "method": "bdev_malloc_create" 01:01:33.978 }, 01:01:33.978 { 01:01:33.978 "params": { 01:01:33.978 "filename": "/dev/zram1", 01:01:33.978 "name": "uring0" 01:01:33.978 }, 01:01:33.978 "method": "bdev_uring_create" 01:01:33.978 }, 01:01:33.978 { 01:01:33.978 "params": { 01:01:33.978 "name": "uring0" 01:01:33.978 }, 01:01:33.978 "method": "bdev_uring_delete" 01:01:33.978 }, 01:01:33.978 { 01:01:33.978 "method": "bdev_wait_for_examine" 01:01:33.978 } 01:01:33.978 ] 01:01:33.978 } 01:01:33.978 ] 01:01:33.978 } 01:01:34.237 [2024-12-09 06:00:28.699114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:34.237 [2024-12-09 06:00:28.740972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:34.237 [2024-12-09 06:00:28.783897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:34.496  [2024-12-09T06:00:29.342Z] Copying: 0/0 [B] (average 0 Bps) 01:01:34.755 01:01:34.755 06:00:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 01:01:34.755 06:00:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 01:01:34.755 06:00:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 01:01:34.755 06:00:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 01:01:34.755 06:00:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 01:01:34.755 06:00:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 01:01:34.755 06:00:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 01:01:34.755 06:00:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:34.755 06:00:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:34.755 06:00:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:34.755 06:00:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:34.755 06:00:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:34.755 06:00:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:34.755 06:00:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:34.756 06:00:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:01:34.756 06:00:29 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 01:01:34.756 [2024-12-09 06:00:29.321297] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:01:34.756 [2024-12-09 06:00:29.321366] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61174 ] 01:01:34.756 { 01:01:34.756 "subsystems": [ 01:01:34.756 { 01:01:34.756 "subsystem": "bdev", 01:01:34.756 "config": [ 01:01:34.756 { 01:01:34.756 "params": { 01:01:34.756 "block_size": 512, 01:01:34.756 "num_blocks": 1048576, 01:01:34.756 "name": "malloc0" 01:01:34.756 }, 01:01:34.756 "method": "bdev_malloc_create" 01:01:34.756 }, 01:01:34.756 { 01:01:34.756 "params": { 01:01:34.756 "filename": "/dev/zram1", 01:01:34.756 "name": "uring0" 01:01:34.756 }, 01:01:34.756 "method": "bdev_uring_create" 01:01:34.756 }, 01:01:34.756 { 01:01:34.756 "params": { 01:01:34.756 "name": "uring0" 01:01:34.756 }, 01:01:34.756 "method": "bdev_uring_delete" 01:01:34.756 }, 01:01:34.756 { 01:01:34.756 "method": "bdev_wait_for_examine" 01:01:34.756 } 01:01:34.756 ] 01:01:34.756 } 01:01:34.756 ] 01:01:34.756 } 01:01:35.015 [2024-12-09 06:00:29.473632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:35.015 [2024-12-09 06:00:29.516289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:35.015 [2024-12-09 06:00:29.559123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:35.274 [2024-12-09 06:00:29.732222] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 01:01:35.274 [2024-12-09 06:00:29.732503] spdk_dd.c: 931:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 01:01:35.275 [2024-12-09 06:00:29.732521] spdk_dd.c:1088:dd_run: *ERROR*: uring0: No such device 01:01:35.275 [2024-12-09 06:00:29.732532] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:01:35.534 [2024-12-09 06:00:29.985045] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 01:01:35.534 06:00:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 01:01:35.534 06:00:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:35.534 06:00:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 01:01:35.534 06:00:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 01:01:35.534 06:00:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 01:01:35.534 06:00:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:35.534 06:00:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 01:01:35.534 06:00:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 01:01:35.534 06:00:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 01:01:35.534 06:00:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 01:01:35.534 06:00:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 01:01:35.534 06:00:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 01:01:35.793 01:01:35.793 real 0m13.141s 01:01:35.793 user 0m8.687s 01:01:35.793 sys 0m11.981s 01:01:35.793 06:00:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:35.793 ************************************ 01:01:35.793 END TEST dd_uring_copy 01:01:35.793 ************************************ 01:01:35.793 06:00:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 01:01:35.793 01:01:35.793 real 0m13.463s 01:01:35.793 user 0m8.841s 01:01:35.793 sys 0m12.157s 01:01:35.793 ************************************ 01:01:35.793 END TEST spdk_dd_uring 01:01:35.793 ************************************ 01:01:35.793 06:00:30 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:35.793 06:00:30 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 01:01:36.051 06:00:30 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 01:01:36.051 06:00:30 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:36.051 06:00:30 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:36.051 06:00:30 spdk_dd -- common/autotest_common.sh@10 -- # set +x 01:01:36.051 ************************************ 01:01:36.051 START TEST spdk_dd_sparse 01:01:36.051 ************************************ 01:01:36.051 06:00:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 01:01:36.051 * Looking for test storage... 01:01:36.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 01:01:36.051 06:00:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:01:36.051 06:00:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lcov --version 01:01:36.051 06:00:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:01:36.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:36.310 --rc genhtml_branch_coverage=1 01:01:36.310 --rc genhtml_function_coverage=1 01:01:36.310 --rc genhtml_legend=1 01:01:36.310 --rc geninfo_all_blocks=1 01:01:36.310 --rc geninfo_unexecuted_blocks=1 01:01:36.310 01:01:36.310 ' 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:01:36.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:36.310 --rc genhtml_branch_coverage=1 01:01:36.310 --rc genhtml_function_coverage=1 01:01:36.310 --rc genhtml_legend=1 01:01:36.310 --rc geninfo_all_blocks=1 01:01:36.310 --rc geninfo_unexecuted_blocks=1 01:01:36.310 01:01:36.310 ' 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:01:36.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:36.310 --rc genhtml_branch_coverage=1 01:01:36.310 --rc genhtml_function_coverage=1 01:01:36.310 --rc genhtml_legend=1 01:01:36.310 --rc geninfo_all_blocks=1 01:01:36.310 --rc geninfo_unexecuted_blocks=1 01:01:36.310 01:01:36.310 ' 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:01:36.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:36.310 --rc genhtml_branch_coverage=1 01:01:36.310 --rc genhtml_function_coverage=1 01:01:36.310 --rc genhtml_legend=1 01:01:36.310 --rc geninfo_all_blocks=1 01:01:36.310 --rc geninfo_unexecuted_blocks=1 01:01:36.310 01:01:36.310 ' 01:01:36.310 06:00:30 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:01:36.311 06:00:30 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 01:01:36.311 1+0 records in 01:01:36.311 1+0 records out 01:01:36.311 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.0109971 s, 381 MB/s 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 01:01:36.311 1+0 records in 01:01:36.311 1+0 records out 01:01:36.311 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00966906 s, 434 MB/s 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 01:01:36.311 1+0 records in 01:01:36.311 1+0 records out 01:01:36.311 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00605501 s, 693 MB/s 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 01:01:36.311 ************************************ 01:01:36.311 START TEST dd_sparse_file_to_file 01:01:36.311 ************************************ 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 01:01:36.311 06:00:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 01:01:36.311 [2024-12-09 06:00:30.798101] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
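The prepare step traced above builds the fixtures for the sparse suite: a 100 MiB flat file (dd_sparse_aio_disk) that later backs the dd_aio AIO bdev, and a 36 MiB input file (file_zero1) holding three 4 MiB data extents at offsets 0, 16 MiB and 32 MiB with never-written holes in between. spdk_dd is then run with --sparse, --bs=12582912 and a JSON config fed through /dev/fd/62 that creates the AIO bdev and the dd_lvstore lvstore on top of it. A standalone sketch of the same preparation, assuming only GNU coreutils and not the harness's own prepare() helper:

  truncate -s 104857600 dd_sparse_aio_disk             # 100 MiB backing file for the dd_aio bdev
  dd if=/dev/zero of=file_zero1 bs=4M count=1          # 4 MiB of data at offset 0
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4   # 4 MiB of data at offset 16 MiB
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8   # 4 MiB at offset 32 MiB -> 36 MiB apparent size, 12 MiB allocated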
01:01:36.311 [2024-12-09 06:00:30.798201] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61268 ] 01:01:36.311 { 01:01:36.311 "subsystems": [ 01:01:36.311 { 01:01:36.311 "subsystem": "bdev", 01:01:36.311 "config": [ 01:01:36.311 { 01:01:36.311 "params": { 01:01:36.311 "block_size": 4096, 01:01:36.311 "filename": "dd_sparse_aio_disk", 01:01:36.311 "name": "dd_aio" 01:01:36.311 }, 01:01:36.311 "method": "bdev_aio_create" 01:01:36.311 }, 01:01:36.311 { 01:01:36.311 "params": { 01:01:36.311 "lvs_name": "dd_lvstore", 01:01:36.311 "bdev_name": "dd_aio" 01:01:36.311 }, 01:01:36.311 "method": "bdev_lvol_create_lvstore" 01:01:36.311 }, 01:01:36.311 { 01:01:36.311 "method": "bdev_wait_for_examine" 01:01:36.311 } 01:01:36.311 ] 01:01:36.311 } 01:01:36.311 ] 01:01:36.311 } 01:01:36.569 [2024-12-09 06:00:30.947753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:36.569 [2024-12-09 06:00:30.992630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:36.569 [2024-12-09 06:00:31.036875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:36.569  [2024-12-09T06:00:31.414Z] Copying: 12/36 [MB] (average 857 MBps) 01:01:36.827 01:01:36.827 06:00:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 01:01:36.827 06:00:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 01:01:36.827 06:00:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 01:01:36.827 06:00:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 01:01:36.827 06:00:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 01:01:36.827 06:00:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 01:01:36.827 06:00:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 01:01:36.827 06:00:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 01:01:36.827 06:00:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 01:01:36.827 06:00:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 01:01:36.827 01:01:36.827 real 0m0.608s 01:01:36.827 user 0m0.347s 01:01:36.827 sys 0m0.333s 01:01:36.827 06:00:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:36.827 06:00:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 01:01:36.827 ************************************ 01:01:36.827 END TEST dd_sparse_file_to_file 01:01:36.827 ************************************ 01:01:36.827 06:00:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 01:01:36.827 06:00:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:36.827 06:00:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:36.827 06:00:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 01:01:37.086 ************************************ 01:01:37.086 START TEST dd_sparse_file_to_bdev 
01:01:37.086 ************************************ 01:01:37.086 06:00:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 01:01:37.086 06:00:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 01:01:37.086 06:00:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 01:01:37.086 06:00:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 01:01:37.086 06:00:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 01:01:37.086 06:00:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 01:01:37.086 06:00:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 01:01:37.086 06:00:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 01:01:37.086 06:00:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 01:01:37.086 [2024-12-09 06:00:31.477932] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:01:37.086 [2024-12-09 06:00:31.478010] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61317 ] 01:01:37.086 { 01:01:37.086 "subsystems": [ 01:01:37.086 { 01:01:37.086 "subsystem": "bdev", 01:01:37.086 "config": [ 01:01:37.086 { 01:01:37.086 "params": { 01:01:37.086 "block_size": 4096, 01:01:37.086 "filename": "dd_sparse_aio_disk", 01:01:37.086 "name": "dd_aio" 01:01:37.086 }, 01:01:37.086 "method": "bdev_aio_create" 01:01:37.086 }, 01:01:37.086 { 01:01:37.086 "params": { 01:01:37.086 "lvs_name": "dd_lvstore", 01:01:37.086 "lvol_name": "dd_lvol", 01:01:37.086 "size_in_mib": 36, 01:01:37.086 "thin_provision": true 01:01:37.086 }, 01:01:37.086 "method": "bdev_lvol_create" 01:01:37.086 }, 01:01:37.086 { 01:01:37.086 "method": "bdev_wait_for_examine" 01:01:37.086 } 01:01:37.086 ] 01:01:37.086 } 01:01:37.086 ] 01:01:37.086 } 01:01:37.086 [2024-12-09 06:00:31.627998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:37.344 [2024-12-09 06:00:31.677011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:37.344 [2024-12-09 06:00:31.724217] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:37.344  [2024-12-09T06:00:32.189Z] Copying: 12/36 [MB] (average 413 MBps) 01:01:37.602 01:01:37.602 01:01:37.602 real 0m0.569s 01:01:37.602 user 0m0.349s 01:01:37.602 sys 0m0.316s 01:01:37.602 06:00:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:37.602 06:00:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 01:01:37.602 ************************************ 01:01:37.602 END TEST dd_sparse_file_to_bdev 01:01:37.602 ************************************ 01:01:37.602 06:00:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 01:01:37.602 06:00:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:37.602 06:00:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:37.602 06:00:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 01:01:37.602 ************************************ 01:01:37.602 START TEST dd_sparse_bdev_to_file 01:01:37.602 ************************************ 01:01:37.602 06:00:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 01:01:37.602 06:00:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 01:01:37.602 06:00:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 01:01:37.603 06:00:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 01:01:37.603 06:00:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 01:01:37.603 06:00:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 01:01:37.603 06:00:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 01:01:37.603 06:00:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 01:01:37.603 06:00:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 01:01:37.603 [2024-12-09 06:00:32.117655] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:01:37.603 [2024-12-09 06:00:32.117731] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61351 ] 01:01:37.603 { 01:01:37.603 "subsystems": [ 01:01:37.603 { 01:01:37.603 "subsystem": "bdev", 01:01:37.603 "config": [ 01:01:37.603 { 01:01:37.603 "params": { 01:01:37.603 "block_size": 4096, 01:01:37.603 "filename": "dd_sparse_aio_disk", 01:01:37.603 "name": "dd_aio" 01:01:37.603 }, 01:01:37.603 "method": "bdev_aio_create" 01:01:37.603 }, 01:01:37.603 { 01:01:37.603 "method": "bdev_wait_for_examine" 01:01:37.603 } 01:01:37.603 ] 01:01:37.603 } 01:01:37.603 ] 01:01:37.603 } 01:01:37.860 [2024-12-09 06:00:32.268158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:37.860 [2024-12-09 06:00:32.316984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:37.860 [2024-12-09 06:00:32.363785] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:37.860  [2024-12-09T06:00:32.706Z] Copying: 12/36 [MB] (average 800 MBps) 01:01:38.119 01:01:38.119 06:00:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 01:01:38.119 06:00:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 01:01:38.119 06:00:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 01:01:38.119 06:00:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 01:01:38.119 06:00:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 01:01:38.119 06:00:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 01:01:38.119 06:00:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 01:01:38.119 06:00:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 01:01:38.119 06:00:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 01:01:38.119 06:00:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 01:01:38.119 01:01:38.119 real 0m0.576s 01:01:38.119 user 0m0.334s 01:01:38.119 sys 0m0.326s 01:01:38.119 06:00:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:38.119 ************************************ 01:01:38.119 END TEST dd_sparse_bdev_to_file 01:01:38.119 ************************************ 01:01:38.119 06:00:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 01:01:38.378 06:00:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 01:01:38.378 06:00:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 01:01:38.378 06:00:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 01:01:38.378 06:00:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 01:01:38.378 06:00:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 01:01:38.378 01:01:38.378 real 0m2.300s 01:01:38.378 user 0m1.234s 01:01:38.378 sys 0m1.325s 01:01:38.378 06:00:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:38.378 06:00:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 01:01:38.378 ************************************ 01:01:38.378 END TEST spdk_dd_sparse 01:01:38.378 ************************************ 01:01:38.378 06:00:32 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 01:01:38.378 06:00:32 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:38.378 06:00:32 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:38.378 06:00:32 spdk_dd -- common/autotest_common.sh@10 -- # set +x 01:01:38.378 ************************************ 01:01:38.378 START TEST spdk_dd_negative 01:01:38.378 ************************************ 01:01:38.378 06:00:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 01:01:38.378 * Looking for test storage... 
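The file-to-file and bdev-to-file copies above (file_zero1 -> file_zero2, and dd_lvstore/dd_lvol -> file_zero3) are verified with nothing more than stat: the apparent size (%s) and the allocated 512-byte block count (%b) of source and destination must match, which only holds if spdk_dd --sparse skipped the holes instead of materializing them as zeroes. Before cleanup removes the files, the same check could be repeated by hand; a minimal sketch assuming GNU stat:

  stat --printf='%n %s %b\n' file_zero1 file_zero2 file_zero3
  # each file is expected to report 37748736 bytes apparent (36 MiB) and 24576 blocks (12 MiB actually allocated)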
01:01:38.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 01:01:38.378 06:00:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:01:38.653 06:00:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lcov --version 01:01:38.653 06:00:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:01:38.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:38.653 --rc genhtml_branch_coverage=1 01:01:38.653 --rc genhtml_function_coverage=1 01:01:38.653 --rc genhtml_legend=1 01:01:38.653 --rc geninfo_all_blocks=1 01:01:38.653 --rc geninfo_unexecuted_blocks=1 01:01:38.653 01:01:38.653 ' 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:01:38.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:38.653 --rc genhtml_branch_coverage=1 01:01:38.653 --rc genhtml_function_coverage=1 01:01:38.653 --rc genhtml_legend=1 01:01:38.653 --rc geninfo_all_blocks=1 01:01:38.653 --rc geninfo_unexecuted_blocks=1 01:01:38.653 01:01:38.653 ' 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:01:38.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:38.653 --rc genhtml_branch_coverage=1 01:01:38.653 --rc genhtml_function_coverage=1 01:01:38.653 --rc genhtml_legend=1 01:01:38.653 --rc geninfo_all_blocks=1 01:01:38.653 --rc geninfo_unexecuted_blocks=1 01:01:38.653 01:01:38.653 ' 01:01:38.653 06:00:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:01:38.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:38.653 --rc genhtml_branch_coverage=1 01:01:38.653 --rc genhtml_function_coverage=1 01:01:38.653 --rc genhtml_legend=1 01:01:38.653 --rc geninfo_all_blocks=1 01:01:38.654 --rc geninfo_unexecuted_blocks=1 01:01:38.654 01:01:38.654 ' 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:01:38.654 ************************************ 01:01:38.654 START TEST 
dd_invalid_arguments 01:01:38.654 ************************************ 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:01:38.654 06:00:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 01:01:38.654 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 01:01:38.654 01:01:38.654 CPU options: 01:01:38.654 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 01:01:38.654 (like [0,1,10]) 01:01:38.654 --lcores lcore to CPU mapping list. The list is in the format: 01:01:38.654 [<,lcores[@CPUs]>...] 01:01:38.654 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 01:01:38.654 Within the group, '-' is used for range separator, 01:01:38.654 ',' is used for single number separator. 01:01:38.654 '( )' can be omitted for single element group, 01:01:38.654 '@' can be omitted if cpus and lcores have the same value 01:01:38.654 --disable-cpumask-locks Disable CPU core lock files. 01:01:38.654 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 01:01:38.654 pollers in the app support interrupt mode) 01:01:38.654 -p, --main-core main (primary) core for DPDK 01:01:38.654 01:01:38.654 Configuration options: 01:01:38.654 -c, --config, --json JSON config file 01:01:38.654 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 01:01:38.654 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
01:01:38.654 --wait-for-rpc wait for RPCs to initialize subsystems 01:01:38.654 --rpcs-allowed comma-separated list of permitted RPCS 01:01:38.654 --json-ignore-init-errors don't exit on invalid config entry 01:01:38.654 01:01:38.654 Memory options: 01:01:38.654 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 01:01:38.654 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 01:01:38.654 --huge-dir use a specific hugetlbfs mount to reserve memory from 01:01:38.654 -R, --huge-unlink unlink huge files after initialization 01:01:38.654 -n, --mem-channels number of memory channels used for DPDK 01:01:38.654 -s, --mem-size memory size in MB for DPDK (default: 0MB) 01:01:38.654 --msg-mempool-size global message memory pool size in count (default: 262143) 01:01:38.654 --no-huge run without using hugepages 01:01:38.654 --enforce-numa enforce NUMA allocations from the specified NUMA node 01:01:38.654 -i, --shm-id shared memory ID (optional) 01:01:38.654 -g, --single-file-segments force creating just one hugetlbfs file 01:01:38.654 01:01:38.654 PCI options: 01:01:38.654 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 01:01:38.654 -B, --pci-blocked pci addr to block (can be used more than once) 01:01:38.654 -u, --no-pci disable PCI access 01:01:38.654 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 01:01:38.654 01:01:38.654 Log options: 01:01:38.654 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 01:01:38.654 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 01:01:38.654 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 01:01:38.654 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 01:01:38.654 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 01:01:38.654 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 01:01:38.654 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 01:01:38.654 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 01:01:38.654 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 01:01:38.654 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 01:01:38.654 virtio_pci, virtio_user, virtio_vfio_user, vmd) 01:01:38.654 --silence-noticelog disable notice level logging to stderr 01:01:38.654 01:01:38.654 Trace options: 01:01:38.654 --num-trace-entries number of trace entries for each core, must be power of 2, 01:01:38.654 setting 0 to disable trace (default 32768) 01:01:38.654 Tracepoints vary in size and can use more than one trace entry. 01:01:38.654 -e, --tpoint-group [:] 01:01:38.654 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 01:01:38.654 [2024-12-09 06:00:33.158327] spdk_dd.c:1478:main: *ERROR*: Invalid arguments 01:01:38.654 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 01:01:38.654 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 01:01:38.654 bdev_raid, scheduler, all). 01:01:38.654 tpoint_mask - tracepoint mask for enabling individual tpoints inside 01:01:38.654 a tracepoint group. First tpoint inside a group can be enabled by 01:01:38.654 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 01:01:38.654 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 01:01:38.654 in /include/spdk_internal/trace_defs.h 01:01:38.655 01:01:38.655 Other options: 01:01:38.655 -h, --help show this usage 01:01:38.655 -v, --version print SPDK version 01:01:38.655 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 01:01:38.655 --env-context Opaque context for use of the env implementation 01:01:38.655 01:01:38.655 Application specific: 01:01:38.655 [--------- DD Options ---------] 01:01:38.655 --if Input file. Must specify either --if or --ib. 01:01:38.655 --ib Input bdev. Must specifier either --if or --ib 01:01:38.655 --of Output file. Must specify either --of or --ob. 01:01:38.655 --ob Output bdev. Must specify either --of or --ob. 01:01:38.655 --iflag Input file flags. 01:01:38.655 --oflag Output file flags. 01:01:38.655 --bs I/O unit size (default: 4096) 01:01:38.655 --qd Queue depth (default: 2) 01:01:38.655 --count I/O unit count. The number of I/O units to copy. (default: all) 01:01:38.655 --skip Skip this many I/O units at start of input. (default: 0) 01:01:38.655 --seek Skip this many I/O units at start of output. (default: 0) 01:01:38.655 --aio Force usage of AIO. (by default io_uring is used if available) 01:01:38.655 --sparse Enable hole skipping in input target 01:01:38.655 Available iflag and oflag values: 01:01:38.655 append - append mode 01:01:38.655 direct - use direct I/O for data 01:01:38.655 directory - fail unless a directory 01:01:38.655 dsync - use synchronized I/O for data 01:01:38.655 noatime - do not update access time 01:01:38.655 noctty - do not assign controlling terminal from file 01:01:38.655 nofollow - do not follow symlinks 01:01:38.655 nonblock - use non-blocking I/O 01:01:38.655 sync - use synchronized I/O for data and metadata 01:01:38.655 06:00:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 01:01:38.655 06:00:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:38.655 06:00:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:01:38.655 06:00:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:38.655 01:01:38.655 real 0m0.074s 01:01:38.655 user 0m0.044s 01:01:38.655 sys 0m0.029s 01:01:38.655 06:00:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:38.655 ************************************ 01:01:38.655 END TEST dd_invalid_arguments 01:01:38.655 ************************************ 01:01:38.655 06:00:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 01:01:38.655 06:00:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 01:01:38.655 06:00:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:38.655 06:00:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:38.655 06:00:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:01:38.913 ************************************ 01:01:38.913 START TEST dd_double_input 01:01:38.913 ************************************ 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 01:01:38.913 [2024-12-09 06:00:33.310956] spdk_dd.c:1485:main: *ERROR*: You may specify either --if or --ib, but not both. 
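dd_invalid_arguments and dd_double_input above establish the pattern the rest of the negative suite follows: each case runs spdk_dd through the NOT helper with a deliberately bad argument set, expects a non-zero exit status (es=2 for the unknown --ii= flag, es=22, i.e. EINVAL, for the conflicting inputs), and relies on the specific *ERROR* line from spdk_dd.c appearing in the trace. Stripped of the harness wrappers, the assertion each case makes amounts to something like this sketch in plain bash (not the autotest_common.sh code itself):

  # $SPDK_DD stands in for /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  if "$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=; then
      echo "expected spdk_dd to reject --if together with --ib" >&2
      exit 1
  fi

The later cases (double output, missing input or output, --bs=0, an oversized --bs, a negative --count, and misplaced --oflag/--iflag) reuse the same structure with different argument combinations.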
01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:38.913 01:01:38.913 real 0m0.077s 01:01:38.913 user 0m0.037s 01:01:38.913 sys 0m0.039s 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:38.913 ************************************ 01:01:38.913 END TEST dd_double_input 01:01:38.913 ************************************ 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:01:38.913 ************************************ 01:01:38.913 START TEST dd_double_output 01:01:38.913 ************************************ 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 01:01:38.913 [2024-12-09 06:00:33.453542] spdk_dd.c:1491:main: *ERROR*: You may specify either --of or --ob, but not both. 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:38.913 01:01:38.913 real 0m0.071s 01:01:38.913 user 0m0.039s 01:01:38.913 sys 0m0.031s 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:38.913 06:00:33 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 01:01:38.913 ************************************ 01:01:38.913 END TEST dd_double_output 01:01:38.913 ************************************ 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:01:39.172 ************************************ 01:01:39.172 START TEST dd_no_input 01:01:39.172 ************************************ 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 01:01:39.172 [2024-12-09 06:00:33.606477] spdk_dd.c:1497:main: 
*ERROR*: You must specify either --if or --ib 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:39.172 01:01:39.172 real 0m0.076s 01:01:39.172 user 0m0.052s 01:01:39.172 sys 0m0.024s 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 01:01:39.172 ************************************ 01:01:39.172 END TEST dd_no_input 01:01:39.172 ************************************ 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:01:39.172 ************************************ 01:01:39.172 START TEST dd_no_output 01:01:39.172 ************************************ 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:01:39.172 06:00:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:01:39.432 [2024-12-09 06:00:33.758437] spdk_dd.c:1503:main: *ERROR*: You must specify either --of or --ob 01:01:39.432 06:00:33 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 01:01:39.432 06:00:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:39.432 06:00:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:01:39.432 06:00:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:39.432 01:01:39.432 real 0m0.076s 01:01:39.432 user 0m0.046s 01:01:39.432 sys 0m0.029s 01:01:39.432 06:00:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:39.432 06:00:33 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 01:01:39.432 ************************************ 01:01:39.432 END TEST dd_no_output 01:01:39.432 ************************************ 01:01:39.432 06:00:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 01:01:39.432 06:00:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:39.432 06:00:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:39.432 06:00:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:01:39.432 ************************************ 01:01:39.432 START TEST dd_wrong_blocksize 01:01:39.432 ************************************ 01:01:39.432 06:00:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 01:01:39.432 06:00:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 01:01:39.432 06:00:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 01:01:39.432 06:00:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 01:01:39.432 06:00:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:39.432 06:00:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:39.432 06:00:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:39.432 06:00:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:39.432 06:00:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:39.432 06:00:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:39.432 06:00:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:39.433 06:00:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:01:39.433 06:00:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 01:01:39.433 [2024-12-09 06:00:33.907264] spdk_dd.c:1509:main: *ERROR*: Invalid --bs value 01:01:39.433 06:00:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 01:01:39.433 06:00:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:39.433 06:00:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:01:39.433 06:00:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:39.433 01:01:39.433 real 0m0.071s 01:01:39.433 user 0m0.036s 01:01:39.433 sys 0m0.034s 01:01:39.433 06:00:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:39.433 06:00:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 01:01:39.433 ************************************ 01:01:39.433 END TEST dd_wrong_blocksize 01:01:39.433 ************************************ 01:01:39.433 06:00:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 01:01:39.433 06:00:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:39.433 06:00:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:39.433 06:00:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:01:39.433 ************************************ 01:01:39.433 START TEST dd_smaller_blocksize 01:01:39.433 ************************************ 01:01:39.433 06:00:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 01:01:39.433 06:00:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 01:01:39.433 06:00:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 01:01:39.433 06:00:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 01:01:39.433 06:00:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:39.433 06:00:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:39.433 06:00:34 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:39.433 06:00:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:39.433 06:00:34 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:39.433 06:00:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:39.433 06:00:34 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:39.433 
06:00:34 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:01:39.433 06:00:34 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 01:01:39.692 [2024-12-09 06:00:34.058006] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:01:39.692 [2024-12-09 06:00:34.058076] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61581 ] 01:01:39.692 [2024-12-09 06:00:34.210903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:39.692 [2024-12-09 06:00:34.260451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:39.951 [2024-12-09 06:00:34.307635] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:40.210 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 01:01:40.470 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 01:01:40.470 [2024-12-09 06:00:34.867144] spdk_dd.c:1182:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 01:01:40.470 [2024-12-09 06:00:34.867194] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:01:40.470 [2024-12-09 06:00:34.964317] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 01:01:40.470 06:00:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 01:01:40.470 06:00:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:40.470 06:00:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 01:01:40.470 06:00:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 01:01:40.470 06:00:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 01:01:40.470 06:00:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:40.470 01:01:40.470 real 0m1.021s 01:01:40.470 user 0m0.366s 01:01:40.470 sys 0m0.549s 01:01:40.470 06:00:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:40.470 ************************************ 01:01:40.470 END TEST dd_smaller_blocksize 01:01:40.470 06:00:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 01:01:40.470 ************************************ 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:01:40.729 ************************************ 01:01:40.729 START TEST dd_invalid_count 01:01:40.729 ************************************ 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 01:01:40.729 [2024-12-09 06:00:35.153667] spdk_dd.c:1515:main: *ERROR*: Invalid --count value 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:40.729 01:01:40.729 real 0m0.073s 01:01:40.729 user 0m0.039s 01:01:40.729 sys 0m0.034s 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 01:01:40.729 ************************************ 01:01:40.729 END TEST dd_invalid_count 01:01:40.729 ************************************ 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:01:40.729 ************************************ 
01:01:40.729 START TEST dd_invalid_oflag 01:01:40.729 ************************************ 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 01:01:40.729 [2024-12-09 06:00:35.296654] spdk_dd.c:1521:main: *ERROR*: --oflags may be used only with --of 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:40.729 01:01:40.729 real 0m0.072s 01:01:40.729 user 0m0.044s 01:01:40.729 sys 0m0.028s 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:40.729 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 01:01:40.729 ************************************ 01:01:40.729 END TEST dd_invalid_oflag 01:01:40.729 ************************************ 01:01:40.988 06:00:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 01:01:40.988 06:00:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:40.988 06:00:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:40.988 06:00:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:01:40.988 ************************************ 01:01:40.988 START TEST dd_invalid_iflag 01:01:40.988 
************************************ 01:01:40.988 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 01:01:40.988 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 01:01:40.988 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 01:01:40.988 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 01:01:40.988 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:40.988 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:40.988 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:40.988 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:40.988 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:40.988 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:40.989 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:40.989 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:01:40.989 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 01:01:40.989 [2024-12-09 06:00:35.445741] spdk_dd.c:1527:main: *ERROR*: --iflags may be used only with --if 01:01:40.989 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 01:01:40.989 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:40.989 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:01:40.989 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:40.989 01:01:40.989 real 0m0.073s 01:01:40.989 user 0m0.038s 01:01:40.989 sys 0m0.034s 01:01:40.989 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:40.989 ************************************ 01:01:40.989 END TEST dd_invalid_iflag 01:01:40.989 ************************************ 01:01:40.989 06:00:35 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 01:01:40.989 06:00:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 01:01:40.989 06:00:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:40.989 06:00:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:40.989 06:00:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:01:40.989 ************************************ 01:01:40.989 START TEST dd_unknown_flag 01:01:40.989 ************************************ 01:01:40.989 
06:00:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 01:01:40.989 06:00:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 01:01:40.989 06:00:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 01:01:40.989 06:00:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 01:01:40.989 06:00:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:40.989 06:00:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:40.989 06:00:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:40.989 06:00:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:40.989 06:00:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:40.989 06:00:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:40.989 06:00:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:40.989 06:00:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:01:40.989 06:00:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 01:01:41.248 [2024-12-09 06:00:35.593759] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
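dd_unknown_flag switches from malformed numeric options to an unrecognized file flag: --oflag=-1 is not in spdk_dd's accepted flag set, so parse_flags reports 'Unknown file flag: -1' and the application shuts itself down (the spdk_app_stop warnings further down in the trace accompany that shutdown). Outside the harness the check reduces to the sketch below, with the dump-file paths taken from the log.

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
        --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
        --oflag=-1 \
      && { echo "FAIL: an unknown --oflag was accepted" >&2; exit 1; }
    # Expected on stderr: *ERROR*: Unknown file flag: -1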
01:01:41.248 [2024-12-09 06:00:35.593824] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61680 ] 01:01:41.248 [2024-12-09 06:00:35.742577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:41.248 [2024-12-09 06:00:35.791075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:41.507 [2024-12-09 06:00:35.837607] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:41.507 [2024-12-09 06:00:35.866670] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 01:01:41.507 [2024-12-09 06:00:35.866945] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:01:41.507 [2024-12-09 06:00:35.867007] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 01:01:41.507 [2024-12-09 06:00:35.867019] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:01:41.507 [2024-12-09 06:00:35.867252] spdk_dd.c:1216:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 01:01:41.507 [2024-12-09 06:00:35.867265] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:01:41.507 [2024-12-09 06:00:35.867322] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 01:01:41.507 [2024-12-09 06:00:35.867330] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 01:01:41.507 [2024-12-09 06:00:35.962242] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 01:01:41.507 06:00:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 01:01:41.507 06:00:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:41.507 06:00:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 01:01:41.507 ************************************ 01:01:41.507 END TEST dd_unknown_flag 01:01:41.507 ************************************ 01:01:41.507 06:00:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 01:01:41.507 06:00:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 01:01:41.507 06:00:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:41.507 01:01:41.507 real 0m0.485s 01:01:41.507 user 0m0.238s 01:01:41.507 sys 0m0.150s 01:01:41.507 06:00:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:41.507 06:00:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 01:01:41.507 06:00:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 01:01:41.507 06:00:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:41.507 06:00:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:41.507 06:00:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:01:41.766 ************************************ 01:01:41.766 START TEST dd_invalid_json 01:01:41.766 ************************************ 01:01:41.766 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 01:01:41.766 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 01:01:41.766 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 01:01:41.766 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 01:01:41.766 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 01:01:41.766 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:41.766 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:41.766 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:41.766 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:41.766 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:41.766 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:41.766 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:41.766 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:01:41.766 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 01:01:41.766 [2024-12-09 06:00:36.164700] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
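dd_invalid_json exercises the --json config path rather than a dd option: negative_dd.sh feeds the output of the ':' no-op through a file descriptor (the /dev/fd/62 argument above), so spdk_dd receives a zero-byte config and fails in parse_json with 'JSON data cannot be empty'. The same failure can be sketched with process substitution; the descriptor number bash picks may differ from the one in the log.

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
        --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
        --json <(:)     # empty config; spdk_dd sees a /dev/fd/NN path with no data behind it
    # Expected: *ERROR*: JSON data cannot be empty, followed by a non-zero exit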
01:01:41.766 [2024-12-09 06:00:36.164784] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61703 ] 01:01:41.766 [2024-12-09 06:00:36.317063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:42.026 [2024-12-09 06:00:36.366685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:42.026 [2024-12-09 06:00:36.366933] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 01:01:42.026 [2024-12-09 06:00:36.366957] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 01:01:42.026 [2024-12-09 06:00:36.366966] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:01:42.026 [2024-12-09 06:00:36.366999] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 01:01:42.026 ************************************ 01:01:42.026 END TEST dd_invalid_json 01:01:42.026 ************************************ 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:42.026 01:01:42.026 real 0m0.321s 01:01:42.026 user 0m0.141s 01:01:42.026 sys 0m0.080s 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:01:42.026 ************************************ 01:01:42.026 START TEST dd_invalid_seek 01:01:42.026 ************************************ 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 01:01:42.026 
06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:01:42.026 06:00:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 01:01:42.026 { 01:01:42.026 "subsystems": [ 01:01:42.026 { 01:01:42.026 "subsystem": "bdev", 01:01:42.026 "config": [ 01:01:42.026 { 01:01:42.026 "params": { 01:01:42.026 "block_size": 512, 01:01:42.026 "num_blocks": 512, 01:01:42.027 "name": "malloc0" 01:01:42.027 }, 01:01:42.027 "method": "bdev_malloc_create" 01:01:42.027 }, 01:01:42.027 { 01:01:42.027 "params": { 01:01:42.027 "block_size": 512, 01:01:42.027 "num_blocks": 512, 01:01:42.027 "name": "malloc1" 01:01:42.027 }, 01:01:42.027 "method": "bdev_malloc_create" 01:01:42.027 }, 01:01:42.027 { 01:01:42.027 "method": "bdev_wait_for_examine" 01:01:42.027 } 01:01:42.027 ] 01:01:42.027 } 01:01:42.027 ] 01:01:42.027 } 01:01:42.027 [2024-12-09 06:00:36.564032] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
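dd_invalid_seek is the first case in this group that goes through bdevs instead of dump files: the JSON above creates two RAM-backed malloc bdevs, malloc0 and malloc1, each 512 blocks of 512 bytes (256 KiB), and the copy is asked to --seek=513 blocks into malloc1 before writing. Since the output bdev only exposes 512 blocks, spdk_dd refuses with '--seek value too big (513) - only 512 blocks available in output'. Stripped of the gen_conf plumbing, the invocation looks like the sketch below; malloc_bdevs.json is a hypothetical file holding the JSON shown above.

    # malloc1 capacity: 512 blocks * 512 B = 262144 B; seeking 513 blocks
    # means writing from offset 513 * 512 B = 262656 B, past the end of the device.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --ib=malloc0 --ob=malloc1 --seek=513 --bs=512 \
        --json malloc_bdevs.json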
01:01:42.027 [2024-12-09 06:00:36.564146] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61738 ] 01:01:42.286 [2024-12-09 06:00:36.715053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:42.286 [2024-12-09 06:00:36.764496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:42.286 [2024-12-09 06:00:36.811873] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:42.286 [2024-12-09 06:00:36.866730] spdk_dd.c:1143:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 01:01:42.286 [2024-12-09 06:00:36.866776] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:01:42.546 [2024-12-09 06:00:36.962767] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 01:01:42.547 ************************************ 01:01:42.547 END TEST dd_invalid_seek 01:01:42.547 ************************************ 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:42.547 01:01:42.547 real 0m0.524s 01:01:42.547 user 0m0.308s 01:01:42.547 sys 0m0.174s 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:01:42.547 ************************************ 01:01:42.547 START TEST dd_invalid_skip 01:01:42.547 ************************************ 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:01:42.547 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 01:01:42.806 { 01:01:42.806 "subsystems": [ 01:01:42.806 { 01:01:42.806 "subsystem": "bdev", 01:01:42.806 "config": [ 01:01:42.806 { 01:01:42.806 "params": { 01:01:42.806 "block_size": 512, 01:01:42.806 "num_blocks": 512, 01:01:42.806 "name": "malloc0" 01:01:42.806 }, 01:01:42.806 "method": "bdev_malloc_create" 01:01:42.806 }, 01:01:42.806 { 01:01:42.806 "params": { 01:01:42.806 "block_size": 512, 01:01:42.806 "num_blocks": 512, 01:01:42.806 "name": "malloc1" 01:01:42.806 }, 01:01:42.806 "method": "bdev_malloc_create" 01:01:42.806 }, 01:01:42.806 { 01:01:42.806 "method": "bdev_wait_for_examine" 01:01:42.806 } 01:01:42.806 ] 01:01:42.806 } 01:01:42.806 ] 01:01:42.806 } 01:01:42.807 [2024-12-09 06:00:37.163202] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
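dd_invalid_skip is the mirror image of the previous case: --skip discards blocks on the input (--ib) side the way --seek offsets the output (--ob) side, and the same 512-block malloc bdevs make 513 an out-of-range value in both directions. Side by side, with the same hypothetical malloc_bdevs.json and the full spdk_dd path shortened for readability:

    spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --bs=512 --json malloc_bdevs.json
    # -> *ERROR*: --seek value too big (513) - only 512 blocks available in output
    spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --bs=512 --json malloc_bdevs.json
    # -> *ERROR*: --skip value too big (513) - only 512 blocks available in input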
01:01:42.807 [2024-12-09 06:00:37.163277] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61766 ] 01:01:42.807 [2024-12-09 06:00:37.303155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:42.807 [2024-12-09 06:00:37.348148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:43.066 [2024-12-09 06:00:37.392343] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:43.066 [2024-12-09 06:00:37.446827] spdk_dd.c:1100:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 01:01:43.066 [2024-12-09 06:00:37.446874] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:01:43.066 [2024-12-09 06:00:37.542848] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 01:01:43.066 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 01:01:43.066 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:43.066 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 01:01:43.066 ************************************ 01:01:43.066 END TEST dd_invalid_skip 01:01:43.066 ************************************ 01:01:43.066 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 01:01:43.066 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 01:01:43.066 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:43.066 01:01:43.066 real 0m0.507s 01:01:43.066 user 0m0.308s 01:01:43.066 sys 0m0.160s 01:01:43.066 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:43.066 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 01:01:43.327 06:00:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 01:01:43.327 06:00:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:43.327 06:00:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:43.327 06:00:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:01:43.327 ************************************ 01:01:43.327 START TEST dd_invalid_input_count 01:01:43.327 ************************************ 01:01:43.327 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 01:01:43.327 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 01:01:43.327 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 01:01:43.327 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 01:01:43.327 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 01:01:43.327 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 01:01:43.327 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 01:01:43.327 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 01:01:43.327 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 01:01:43.327 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 01:01:43.327 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 01:01:43.327 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 01:01:43.327 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:43.327 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 01:01:43.327 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:43.327 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:43.327 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:43.327 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:43.327 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:43.327 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:43.327 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:01:43.327 06:00:37 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 01:01:43.327 [2024-12-09 06:00:37.743658] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
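dd_invalid_input_count applies the same bound to --count: the copy length, counted in blocks, cannot exceed what the input bdev can supply, so --count=513 against the 512-block malloc0 is rejected with '--count value too big (513) - only 512 blocks available from input'. dd_invalid_output_count, which follows, checks the identical limit on the output side by reading from a plain dump file and writing into a single 512-block malloc0. Schematically, with malloc_bdevs.json as before (for the output-side case only malloc0 is defined in the JSON):

    # input-side bound: malloc0 can only supply 512 blocks
    spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --bs=512 --json malloc_bdevs.json
    # output-side bound: malloc0 can only absorb 512 blocks
    spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
            --ob=malloc0 --count=513 --bs=512 --json malloc_bdevs.json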
01:01:43.327 { 01:01:43.327 "subsystems": [ 01:01:43.327 { 01:01:43.327 "subsystem": "bdev", 01:01:43.327 "config": [ 01:01:43.327 { 01:01:43.327 "params": { 01:01:43.327 "block_size": 512, 01:01:43.327 "num_blocks": 512, 01:01:43.327 "name": "malloc0" 01:01:43.327 }, 01:01:43.327 "method": "bdev_malloc_create" 01:01:43.327 }, 01:01:43.327 { 01:01:43.327 "params": { 01:01:43.327 "block_size": 512, 01:01:43.327 "num_blocks": 512, 01:01:43.327 "name": "malloc1" 01:01:43.327 }, 01:01:43.327 "method": "bdev_malloc_create" 01:01:43.327 }, 01:01:43.327 { 01:01:43.327 "method": "bdev_wait_for_examine" 01:01:43.327 } 01:01:43.327 ] 01:01:43.327 } 01:01:43.327 ] 01:01:43.327 } 01:01:43.327 [2024-12-09 06:00:37.743940] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61805 ] 01:01:43.327 [2024-12-09 06:00:37.893923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:43.587 [2024-12-09 06:00:37.943788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:43.587 [2024-12-09 06:00:37.991546] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:43.587 [2024-12-09 06:00:38.046915] spdk_dd.c:1108:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 01:01:43.587 [2024-12-09 06:00:38.046963] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:01:43.587 [2024-12-09 06:00:38.142902] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:43.847 01:01:43.847 real 0m0.518s 01:01:43.847 user 0m0.319s 01:01:43.847 sys 0m0.162s 01:01:43.847 ************************************ 01:01:43.847 END TEST dd_invalid_input_count 01:01:43.847 ************************************ 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:01:43.847 ************************************ 01:01:43.847 START TEST dd_invalid_output_count 01:01:43.847 ************************************ 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # 
invalid_output_count 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:01:43.847 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 01:01:43.847 { 01:01:43.847 "subsystems": [ 01:01:43.847 { 01:01:43.847 "subsystem": "bdev", 01:01:43.847 "config": [ 01:01:43.847 { 01:01:43.847 "params": { 01:01:43.847 "block_size": 512, 01:01:43.847 "num_blocks": 512, 01:01:43.847 "name": "malloc0" 01:01:43.847 }, 01:01:43.847 "method": "bdev_malloc_create" 01:01:43.847 }, 01:01:43.847 { 01:01:43.847 "method": "bdev_wait_for_examine" 01:01:43.847 } 01:01:43.847 ] 01:01:43.847 } 01:01:43.847 ] 01:01:43.847 } 01:01:43.847 [2024-12-09 06:00:38.338524] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 
initialization... 01:01:43.847 [2024-12-09 06:00:38.338765] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61842 ] 01:01:44.107 [2024-12-09 06:00:38.488409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:44.107 [2024-12-09 06:00:38.536236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:44.107 [2024-12-09 06:00:38.583919] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:44.107 [2024-12-09 06:00:38.630824] spdk_dd.c:1150:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 01:01:44.107 [2024-12-09 06:00:38.630876] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:01:44.369 [2024-12-09 06:00:38.726876] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:44.369 01:01:44.369 real 0m0.509s 01:01:44.369 user 0m0.287s 01:01:44.369 sys 0m0.172s 01:01:44.369 ************************************ 01:01:44.369 END TEST dd_invalid_output_count 01:01:44.369 ************************************ 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:01:44.369 ************************************ 01:01:44.369 START TEST dd_bs_not_multiple 01:01:44.369 ************************************ 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 01:01:44.369 06:00:38 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:01:44.369 06:00:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 01:01:44.369 { 01:01:44.369 "subsystems": [ 01:01:44.369 { 01:01:44.369 "subsystem": "bdev", 01:01:44.369 "config": [ 01:01:44.369 { 01:01:44.369 "params": { 01:01:44.369 "block_size": 512, 01:01:44.369 "num_blocks": 512, 01:01:44.369 "name": "malloc0" 01:01:44.369 }, 01:01:44.369 "method": "bdev_malloc_create" 01:01:44.369 }, 01:01:44.369 { 01:01:44.369 "params": { 01:01:44.369 "block_size": 512, 01:01:44.369 "num_blocks": 512, 01:01:44.369 "name": "malloc1" 01:01:44.369 }, 01:01:44.369 "method": "bdev_malloc_create" 01:01:44.369 }, 01:01:44.369 { 01:01:44.369 "method": "bdev_wait_for_examine" 01:01:44.369 } 01:01:44.369 ] 01:01:44.369 } 01:01:44.369 ] 01:01:44.369 } 01:01:44.369 [2024-12-09 06:00:38.927489] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
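dd_bs_not_multiple closes the group with an alignment check rather than a range check: the malloc bdevs have a 512-byte native block size, and a --bs of 513 is not a whole multiple of it, so spdk_dd rejects the transfer size with '--bs value must be a multiple of input native block size (512)'. In isolation (spdk_dd path shortened as in the earlier sketches, malloc_bdevs.json hypothetical):

    spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json malloc_bdevs.json    # 513 % 512 != 0, rejected
    spdk_dd --ib=malloc0 --ob=malloc1 --bs=1024 --json malloc_bdevs.json   # 1024 = 2 * 512, passes this
                                                                           # particular check; the usual
                                                                           # size bounds still apply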
01:01:44.369 [2024-12-09 06:00:38.927562] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61870 ] 01:01:44.663 [2024-12-09 06:00:39.076618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:44.663 [2024-12-09 06:00:39.123894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:44.663 [2024-12-09 06:00:39.171359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:44.663 [2024-12-09 06:00:39.226454] spdk_dd.c:1166:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 01:01:44.663 [2024-12-09 06:00:39.226500] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:01:44.958 [2024-12-09 06:00:39.323130] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 01:01:44.958 06:00:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 01:01:44.958 06:00:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:44.958 06:00:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 01:01:44.958 06:00:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 01:01:44.958 06:00:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 01:01:44.958 06:00:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:44.958 ************************************ 01:01:44.958 END TEST dd_bs_not_multiple 01:01:44.958 ************************************ 01:01:44.958 01:01:44.958 real 0m0.520s 01:01:44.958 user 0m0.331s 01:01:44.958 sys 0m0.155s 01:01:44.958 06:00:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:44.958 06:00:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 01:01:44.958 ************************************ 01:01:44.958 END TEST spdk_dd_negative 01:01:44.958 ************************************ 01:01:44.958 01:01:44.958 real 0m6.618s 01:01:44.958 user 0m3.169s 01:01:44.958 sys 0m2.884s 01:01:44.958 06:00:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:44.958 06:00:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:01:44.958 ************************************ 01:01:44.958 END TEST spdk_dd 01:01:44.958 ************************************ 01:01:44.958 01:01:44.958 real 1m11.675s 01:01:44.958 user 0m43.013s 01:01:44.958 sys 0m34.584s 01:01:44.958 06:00:39 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:44.959 06:00:39 spdk_dd -- common/autotest_common.sh@10 -- # set +x 01:01:45.218 06:00:39 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 01:01:45.218 06:00:39 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 01:01:45.218 06:00:39 -- spdk/autotest.sh@260 -- # timing_exit lib 01:01:45.218 06:00:39 -- common/autotest_common.sh@732 -- # xtrace_disable 01:01:45.218 06:00:39 -- common/autotest_common.sh@10 -- # set +x 01:01:45.218 06:00:39 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 01:01:45.218 06:00:39 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 01:01:45.218 06:00:39 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 01:01:45.218 06:00:39 -- spdk/autotest.sh@277 -- 
# export NET_TYPE 01:01:45.218 06:00:39 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 01:01:45.218 06:00:39 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 01:01:45.218 06:00:39 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 01:01:45.218 06:00:39 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:01:45.218 06:00:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:45.218 06:00:39 -- common/autotest_common.sh@10 -- # set +x 01:01:45.218 ************************************ 01:01:45.218 START TEST nvmf_tcp 01:01:45.218 ************************************ 01:01:45.218 06:00:39 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 01:01:45.218 * Looking for test storage... 01:01:45.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 01:01:45.218 06:00:39 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:01:45.218 06:00:39 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 01:01:45.218 06:00:39 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:01:45.478 06:00:39 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:01:45.478 06:00:39 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:01:45.478 06:00:39 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 01:01:45.478 06:00:39 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 01:01:45.478 06:00:39 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 01:01:45.478 06:00:39 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 01:01:45.478 06:00:39 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 01:01:45.478 06:00:39 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 01:01:45.478 06:00:39 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 01:01:45.478 06:00:39 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 01:01:45.478 06:00:39 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 01:01:45.478 06:00:39 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:01:45.478 06:00:39 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 01:01:45.478 06:00:39 nvmf_tcp -- scripts/common.sh@345 -- # : 1 01:01:45.478 06:00:39 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 01:01:45.478 06:00:39 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:01:45.478 06:00:39 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 01:01:45.478 06:00:39 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 01:01:45.478 06:00:39 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:01:45.478 06:00:39 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 01:01:45.478 06:00:39 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 01:01:45.478 06:00:39 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 01:01:45.478 06:00:39 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 01:01:45.478 06:00:39 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:01:45.478 06:00:39 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 01:01:45.478 06:00:39 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 01:01:45.478 06:00:39 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:01:45.478 06:00:39 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:01:45.478 06:00:39 nvmf_tcp -- scripts/common.sh@368 -- # return 0 01:01:45.478 06:00:39 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:01:45.478 06:00:39 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:01:45.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:45.478 --rc genhtml_branch_coverage=1 01:01:45.478 --rc genhtml_function_coverage=1 01:01:45.478 --rc genhtml_legend=1 01:01:45.478 --rc geninfo_all_blocks=1 01:01:45.478 --rc geninfo_unexecuted_blocks=1 01:01:45.478 01:01:45.478 ' 01:01:45.478 06:00:39 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:01:45.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:45.478 --rc genhtml_branch_coverage=1 01:01:45.478 --rc genhtml_function_coverage=1 01:01:45.478 --rc genhtml_legend=1 01:01:45.478 --rc geninfo_all_blocks=1 01:01:45.478 --rc geninfo_unexecuted_blocks=1 01:01:45.478 01:01:45.478 ' 01:01:45.478 06:00:39 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:01:45.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:45.478 --rc genhtml_branch_coverage=1 01:01:45.478 --rc genhtml_function_coverage=1 01:01:45.478 --rc genhtml_legend=1 01:01:45.478 --rc geninfo_all_blocks=1 01:01:45.478 --rc geninfo_unexecuted_blocks=1 01:01:45.478 01:01:45.478 ' 01:01:45.478 06:00:39 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:01:45.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:45.478 --rc genhtml_branch_coverage=1 01:01:45.478 --rc genhtml_function_coverage=1 01:01:45.478 --rc genhtml_legend=1 01:01:45.478 --rc geninfo_all_blocks=1 01:01:45.478 --rc geninfo_unexecuted_blocks=1 01:01:45.478 01:01:45.478 ' 01:01:45.478 06:00:39 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 01:01:45.478 06:00:39 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 01:01:45.478 06:00:39 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 01:01:45.478 06:00:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:01:45.478 06:00:39 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:45.478 06:00:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:01:45.478 ************************************ 01:01:45.478 START TEST nvmf_target_core 01:01:45.478 ************************************ 01:01:45.478 06:00:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 01:01:45.478 * Looking for test storage... 01:01:45.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 01:01:45.478 06:00:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:01:45.478 06:00:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 01:01:45.478 06:00:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:01:45.738 06:00:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:01:45.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:45.738 --rc genhtml_branch_coverage=1 01:01:45.738 --rc genhtml_function_coverage=1 01:01:45.738 --rc genhtml_legend=1 01:01:45.738 --rc geninfo_all_blocks=1 01:01:45.738 --rc geninfo_unexecuted_blocks=1 01:01:45.738 01:01:45.738 ' 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:01:45.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:45.739 --rc genhtml_branch_coverage=1 01:01:45.739 --rc genhtml_function_coverage=1 01:01:45.739 --rc genhtml_legend=1 01:01:45.739 --rc geninfo_all_blocks=1 01:01:45.739 --rc geninfo_unexecuted_blocks=1 01:01:45.739 01:01:45.739 ' 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:01:45.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:45.739 --rc genhtml_branch_coverage=1 01:01:45.739 --rc genhtml_function_coverage=1 01:01:45.739 --rc genhtml_legend=1 01:01:45.739 --rc geninfo_all_blocks=1 01:01:45.739 --rc geninfo_unexecuted_blocks=1 01:01:45.739 01:01:45.739 ' 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:01:45.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:45.739 --rc genhtml_branch_coverage=1 01:01:45.739 --rc genhtml_function_coverage=1 01:01:45.739 --rc genhtml_legend=1 01:01:45.739 --rc geninfo_all_blocks=1 01:01:45.739 --rc geninfo_unexecuted_blocks=1 01:01:45.739 01:01:45.739 ' 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:01:45.739 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:01:45.739 ************************************ 01:01:45.739 START TEST nvmf_host_management 01:01:45.739 ************************************ 01:01:45.739 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 01:01:46.000 * Looking for test storage... 
01:01:46.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:01:46.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:46.000 --rc genhtml_branch_coverage=1 01:01:46.000 --rc genhtml_function_coverage=1 01:01:46.000 --rc genhtml_legend=1 01:01:46.000 --rc geninfo_all_blocks=1 01:01:46.000 --rc geninfo_unexecuted_blocks=1 01:01:46.000 01:01:46.000 ' 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:01:46.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:46.000 --rc genhtml_branch_coverage=1 01:01:46.000 --rc genhtml_function_coverage=1 01:01:46.000 --rc genhtml_legend=1 01:01:46.000 --rc geninfo_all_blocks=1 01:01:46.000 --rc geninfo_unexecuted_blocks=1 01:01:46.000 01:01:46.000 ' 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:01:46.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:46.000 --rc genhtml_branch_coverage=1 01:01:46.000 --rc genhtml_function_coverage=1 01:01:46.000 --rc genhtml_legend=1 01:01:46.000 --rc geninfo_all_blocks=1 01:01:46.000 --rc geninfo_unexecuted_blocks=1 01:01:46.000 01:01:46.000 ' 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:01:46.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:46.000 --rc genhtml_branch_coverage=1 01:01:46.000 --rc genhtml_function_coverage=1 01:01:46.000 --rc genhtml_legend=1 01:01:46.000 --rc geninfo_all_blocks=1 01:01:46.000 --rc geninfo_unexecuted_blocks=1 01:01:46.000 01:01:46.000 ' 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:46.000 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:01:46.001 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 01:01:46.001 06:00:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:01:46.001 Cannot find device "nvmf_init_br" 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:01:46.001 Cannot find device "nvmf_init_br2" 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:01:46.001 Cannot find device "nvmf_tgt_br" 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 01:01:46.001 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:01:46.261 Cannot find device "nvmf_tgt_br2" 01:01:46.261 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 01:01:46.261 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:01:46.261 Cannot find device "nvmf_init_br" 01:01:46.261 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 01:01:46.261 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:01:46.261 Cannot find device "nvmf_init_br2" 01:01:46.261 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 01:01:46.261 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:01:46.261 Cannot find device "nvmf_tgt_br" 01:01:46.261 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 01:01:46.261 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:01:46.261 Cannot find device "nvmf_tgt_br2" 01:01:46.261 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 01:01:46.261 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:01:46.261 Cannot find device "nvmf_br" 01:01:46.261 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 01:01:46.261 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:01:46.261 Cannot find device "nvmf_init_if" 01:01:46.261 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 01:01:46.261 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:01:46.261 Cannot find device "nvmf_init_if2" 01:01:46.261 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 01:01:46.261 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:01:46.261 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:01:46.261 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 01:01:46.261 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:01:46.261 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:01:46.261 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 01:01:46.261 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:01:46.261 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:01:46.261 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:01:46.261 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:01:46.261 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:01:46.261 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:01:46.261 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:01:46.521 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:01:46.521 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:01:46.521 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:01:46.521 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:01:46.521 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:01:46.521 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:01:46.521 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:01:46.521 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:01:46.521 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:01:46.521 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:01:46.521 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:01:46.521 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:01:46.521 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:01:46.521 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 01:01:46.521 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:01:46.522 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:01:46.522 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:01:46.522 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:01:46.522 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:01:46.522 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:01:46.522 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:01:46.522 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:01:46.522 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:01:46.782 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:01:46.782 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:01:46.782 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:01:46.782 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:01:46.782 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.134 ms 01:01:46.782 01:01:46.782 --- 10.0.0.3 ping statistics --- 01:01:46.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:46.782 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 01:01:46.782 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:01:46.782 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:01:46.782 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.113 ms 01:01:46.782 01:01:46.782 --- 10.0.0.4 ping statistics --- 01:01:46.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:46.782 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 01:01:46.782 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:01:46.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:01:46.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 01:01:46.782 01:01:46.782 --- 10.0.0.1 ping statistics --- 01:01:46.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:46.782 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 01:01:46.782 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:01:46.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:01:46.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 01:01:46.782 01:01:46.782 --- 10.0.0.2 ping statistics --- 01:01:46.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:46.782 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 01:01:46.782 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:01:46.782 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 01:01:46.782 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:01:46.782 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:01:46.782 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:01:46.782 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:01:46.782 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:01:46.782 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:01:46.782 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:01:46.782 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 01:01:46.782 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 01:01:46.782 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 01:01:46.782 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:01:46.782 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 01:01:46.783 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:01:46.783 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62217 01:01:46.783 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 01:01:46.783 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62217 01:01:46.783 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62217 ']' 01:01:46.783 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:01:46.783 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 01:01:46.783 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:01:46.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:01:46.783 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 01:01:46.783 06:00:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:01:46.783 [2024-12-09 06:00:41.274385] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:01:46.783 [2024-12-09 06:00:41.274621] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:01:47.042 [2024-12-09 06:00:41.429495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:01:47.043 [2024-12-09 06:00:41.471964] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:01:47.043 [2024-12-09 06:00:41.472004] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:01:47.043 [2024-12-09 06:00:41.472013] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:01:47.043 [2024-12-09 06:00:41.472022] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:01:47.043 [2024-12-09 06:00:41.472029] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:01:47.043 [2024-12-09 06:00:41.472938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:01:47.043 [2024-12-09 06:00:41.473227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:01:47.043 [2024-12-09 06:00:41.473284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:01:47.043 [2024-12-09 06:00:41.473288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:01:47.043 [2024-12-09 06:00:41.515558] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:47.612 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:01:47.612 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 01:01:47.612 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:01:47.612 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 01:01:47.612 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:01:47.612 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:01:47.612 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:01:47.612 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:47.612 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:01:47.612 [2024-12-09 06:00:42.185238] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:01:47.612 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:47.612 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 01:01:47.612 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 01:01:47.612 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:01:47.872 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
01:01:47.872 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 01:01:47.872 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 01:01:47.872 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:47.872 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:01:47.872 Malloc0 01:01:47.872 [2024-12-09 06:00:42.262435] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:01:47.872 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:47.872 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 01:01:47.872 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 01:01:47.872 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:01:47.872 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62271 01:01:47.872 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62271 /var/tmp/bdevperf.sock 01:01:47.872 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62271 ']' 01:01:47.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:01:47.872 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:01:47.872 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 01:01:47.872 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 01:01:47.872 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 01:01:47.872 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
01:01:47.872 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 01:01:47.872 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 01:01:47.872 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 01:01:47.872 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:01:47.872 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:01:47.872 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:01:47.872 { 01:01:47.872 "params": { 01:01:47.872 "name": "Nvme$subsystem", 01:01:47.872 "trtype": "$TEST_TRANSPORT", 01:01:47.872 "traddr": "$NVMF_FIRST_TARGET_IP", 01:01:47.872 "adrfam": "ipv4", 01:01:47.872 "trsvcid": "$NVMF_PORT", 01:01:47.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:01:47.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:01:47.872 "hdgst": ${hdgst:-false}, 01:01:47.872 "ddgst": ${ddgst:-false} 01:01:47.872 }, 01:01:47.872 "method": "bdev_nvme_attach_controller" 01:01:47.872 } 01:01:47.872 EOF 01:01:47.872 )") 01:01:47.872 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 01:01:47.872 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 01:01:47.872 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 01:01:47.872 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:01:47.872 "params": { 01:01:47.872 "name": "Nvme0", 01:01:47.872 "trtype": "tcp", 01:01:47.872 "traddr": "10.0.0.3", 01:01:47.872 "adrfam": "ipv4", 01:01:47.872 "trsvcid": "4420", 01:01:47.872 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:01:47.872 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:01:47.872 "hdgst": false, 01:01:47.872 "ddgst": false 01:01:47.872 }, 01:01:47.872 "method": "bdev_nvme_attach_controller" 01:01:47.872 }' 01:01:47.872 [2024-12-09 06:00:42.384283] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:01:47.872 [2024-12-09 06:00:42.384479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62271 ] 01:01:48.131 [2024-12-09 06:00:42.535452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:48.131 [2024-12-09 06:00:42.578482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:48.131 [2024-12-09 06:00:42.629355] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:48.389 Running I/O for 10 seconds... 
01:01:48.957 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:01:48.957 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 01:01:48.957 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 01:01:48.957 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:48.957 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:01:48.957 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:48.957 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:01:48.957 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 01:01:48.958 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 01:01:48.958 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 01:01:48.958 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 01:01:48.958 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 01:01:48.958 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 01:01:48.958 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 01:01:48.958 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 01:01:48.958 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 01:01:48.958 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:48.958 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:01:48.958 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:48.958 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1091 01:01:48.958 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1091 -ge 100 ']' 01:01:48.958 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 01:01:48.958 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 01:01:48.958 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 01:01:48.958 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 01:01:48.958 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:48.958 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:01:48.958 [2024-12-09 
06:00:43.318582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.318775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.318930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.319078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.319185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.319288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.319342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.319410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.319457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.319551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.319566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.319575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.319585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.319594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.319604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.319613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.319623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.319632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.319642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.319651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.319661] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.319670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.319680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.319688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.319699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.319708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.319718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.319726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.319736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.319745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.319755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.319763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.319774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.319782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.319798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.319807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.319818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.319826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.319837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.319845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.319855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.319863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.319874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.319883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.319893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.319902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.319912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.319920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.319930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.319939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.319949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.319957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.319968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.319976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.319986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.319994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.320005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.320013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.320023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.320038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.320049] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.320058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.320068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.320077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.320256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.320340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.320425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.320510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.320613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.320714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.320727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.320736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.320747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.320756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.320766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.320775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.320785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.320794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.320804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.320812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.320822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.320831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.320841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.320850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.320860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.320869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.320879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.320887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.320897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.320906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.320916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.320924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.320934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.320943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.320953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.320962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.320972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.320981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.320993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.321001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.321011] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.321020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.321031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.321040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.321050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.321058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.321068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.321077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.321096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.321106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.321116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.321125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.321135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.321144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.321154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.321163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.321173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.321181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.958 [2024-12-09 06:00:43.321191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.958 [2024-12-09 06:00:43.321200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.959 [2024-12-09 06:00:43.321211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.959 [2024-12-09 06:00:43.321219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.959 [2024-12-09 06:00:43.321229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.959 [2024-12-09 06:00:43.321238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.959 [2024-12-09 06:00:43.321248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.959 [2024-12-09 06:00:43.321256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.959 [2024-12-09 06:00:43.321266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:01:48.959 [2024-12-09 06:00:43.321275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.959 [2024-12-09 06:00:43.321284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03c00 is same with the state(6) to be set 01:01:48.959 [2024-12-09 06:00:43.321459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:01:48.959 [2024-12-09 06:00:43.321475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.959 [2024-12-09 06:00:43.321485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:01:48.959 [2024-12-09 06:00:43.321494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.959 [2024-12-09 06:00:43.321504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:01:48.959 [2024-12-09 06:00:43.321512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.959 [2024-12-09 06:00:43.321521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:01:48.959 [2024-12-09 06:00:43.321530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.959 [2024-12-09 06:00:43.321539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a04ce0 is same with the state(6) to be set 01:01:48.959 [2024-12-09 06:00:43.322416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:01:48.959 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:48.959 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 
01:01:48.959 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:48.959 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:01:48.959 task offset: 16384 on job bdev=Nvme0n1 fails 01:01:48.959 01:01:48.959 Latency(us) 01:01:48.959 [2024-12-09T06:00:43.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:01:48.959 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 01:01:48.959 Job: Nvme0n1 ended in about 0.58 seconds with error 01:01:48.959 Verification LBA range: start 0x0 length 0x400 01:01:48.959 Nvme0n1 : 0.58 1995.36 124.71 110.85 0.00 29767.27 3329.44 28425.25 01:01:48.959 [2024-12-09T06:00:43.546Z] =================================================================================================================== 01:01:48.959 [2024-12-09T06:00:43.546Z] Total : 1995.36 124.71 110.85 0.00 29767.27 3329.44 28425.25 01:01:48.959 [2024-12-09 06:00:43.324192] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:01:48.959 [2024-12-09 06:00:43.324219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a04ce0 (9): Bad file descriptor 01:01:48.959 [2024-12-09 06:00:43.331491] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 01:01:48.959 [2024-12-09 06:00:43.331782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 01:01:48.959 [2024-12-09 06:00:43.331930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:01:48.959 [2024-12-09 06:00:43.332000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 01:01:48.959 [2024-12-09 06:00:43.332012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 01:01:48.959 [2024-12-09 06:00:43.332022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:01:48.959 [2024-12-09 06:00:43.332031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a04ce0 01:01:48.959 [2024-12-09 06:00:43.332066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a04ce0 (9): Bad file descriptor 01:01:48.959 [2024-12-09 06:00:43.332081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:01:48.959 [2024-12-09 06:00:43.332109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:01:48.959 [2024-12-09 06:00:43.332120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:01:48.959 [2024-12-09 06:00:43.332131] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
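The burst of ABORTED - SQ DELETION completions and the failure table above record the fault this test injects on purpose: once bdevperf has accumulated enough reads (read_io_count=1091 against the 100-op threshold), the script revokes the host's access with nvmf_subsystem_remove_host, the target tears down the queue pair, the in-flight writes complete as aborted, and the reconnect attempt is rejected with "does not allow host" until access is restored. The same toggle can be expressed with SPDK's standard RPC client (a sketch; the default RPC socket is assumed, while the log uses the rpc_cmd wrapper for the same calls):

    # Revoke, wait, then restore the host's access to the subsystem, as host_management.sh does.
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1                                              # the script sleeps before re-adding
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0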
01:01:48.959 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:48.959 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 01:01:49.895 06:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62271 01:01:49.895 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62271) - No such process 01:01:49.895 06:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 01:01:49.895 06:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 01:01:49.895 06:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 01:01:49.895 06:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 01:01:49.895 06:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 01:01:49.895 06:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 01:01:49.895 06:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:01:49.895 06:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:01:49.895 { 01:01:49.895 "params": { 01:01:49.895 "name": "Nvme$subsystem", 01:01:49.895 "trtype": "$TEST_TRANSPORT", 01:01:49.895 "traddr": "$NVMF_FIRST_TARGET_IP", 01:01:49.895 "adrfam": "ipv4", 01:01:49.895 "trsvcid": "$NVMF_PORT", 01:01:49.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:01:49.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:01:49.895 "hdgst": ${hdgst:-false}, 01:01:49.895 "ddgst": ${ddgst:-false} 01:01:49.895 }, 01:01:49.895 "method": "bdev_nvme_attach_controller" 01:01:49.895 } 01:01:49.895 EOF 01:01:49.895 )") 01:01:49.895 06:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 01:01:49.895 06:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 01:01:49.895 06:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 01:01:49.895 06:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:01:49.895 "params": { 01:01:49.895 "name": "Nvme0", 01:01:49.895 "trtype": "tcp", 01:01:49.895 "traddr": "10.0.0.3", 01:01:49.895 "adrfam": "ipv4", 01:01:49.895 "trsvcid": "4420", 01:01:49.895 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:01:49.895 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:01:49.895 "hdgst": false, 01:01:49.895 "ddgst": false 01:01:49.895 }, 01:01:49.895 "method": "bdev_nvme_attach_controller" 01:01:49.895 }' 01:01:49.895 [2024-12-09 06:00:44.402728] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:01:49.895 [2024-12-09 06:00:44.402794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62309 ] 01:01:50.153 [2024-12-09 06:00:44.558442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:50.153 [2024-12-09 06:00:44.598999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:50.153 [2024-12-09 06:00:44.649107] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:50.411 Running I/O for 1 seconds... 01:01:51.347 1984.00 IOPS, 124.00 MiB/s 01:01:51.347 Latency(us) 01:01:51.347 [2024-12-09T06:00:45.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:01:51.347 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 01:01:51.347 Verification LBA range: start 0x0 length 0x400 01:01:51.347 Nvme0n1 : 1.01 2027.16 126.70 0.00 0.00 31099.47 3039.92 29478.04 01:01:51.347 [2024-12-09T06:00:45.934Z] =================================================================================================================== 01:01:51.347 [2024-12-09T06:00:45.934Z] Total : 2027.16 126.70 0.00 0.00 31099.47 3039.92 29478.04 01:01:51.347 06:00:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 01:01:51.347 06:00:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 01:01:51.606 06:00:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 01:01:51.606 06:00:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 01:01:51.606 06:00:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 01:01:51.606 06:00:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 01:01:51.606 06:00:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 01:01:51.606 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:01:51.606 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 01:01:51.606 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 01:01:51.606 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:01:51.606 rmmod nvme_tcp 01:01:51.606 rmmod nvme_fabrics 01:01:51.606 rmmod nvme_keyring 01:01:51.606 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:01:51.606 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 01:01:51.606 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 01:01:51.606 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62217 ']' 01:01:51.606 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62217 01:01:51.606 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62217 ']' 01:01:51.606 06:00:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62217 01:01:51.606 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 01:01:51.606 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:01:51.606 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62217 01:01:51.606 killing process with pid 62217 01:01:51.606 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:01:51.606 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:01:51.606 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62217' 01:01:51.606 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62217 01:01:51.606 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62217 01:01:51.863 [2024-12-09 06:00:46.435792] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 01:01:52.122 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:01:52.122 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:01:52.122 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:01:52.122 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 01:01:52.122 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 01:01:52.122 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:01:52.122 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 01:01:52.122 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:01:52.122 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:01:52.122 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:01:52.122 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:01:52.122 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:01:52.122 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:01:52.122 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:01:52.122 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:01:52.122 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:01:52.122 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:01:52.122 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:01:52.122 06:00:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:01:52.122 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:01:52.122 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:01:52.381 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:01:52.381 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 01:01:52.381 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:01:52.381 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:01:52.381 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:01:52.381 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 01:01:52.381 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 01:01:52.381 01:01:52.381 real 0m6.565s 01:01:52.381 user 0m22.365s 01:01:52.381 sys 0m1.849s 01:01:52.381 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:52.381 ************************************ 01:01:52.381 END TEST nvmf_host_management 01:01:52.381 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:01:52.381 ************************************ 01:01:52.381 06:00:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 01:01:52.381 06:00:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:01:52.381 06:00:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:52.381 06:00:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:01:52.381 ************************************ 01:01:52.381 START TEST nvmf_lvol 01:01:52.381 ************************************ 01:01:52.381 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 01:01:52.641 * Looking for test storage... 
01:01:52.641 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:01:52.641 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:01:52.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:52.641 --rc genhtml_branch_coverage=1 01:01:52.641 --rc genhtml_function_coverage=1 01:01:52.641 --rc genhtml_legend=1 01:01:52.641 --rc geninfo_all_blocks=1 01:01:52.642 --rc geninfo_unexecuted_blocks=1 01:01:52.642 01:01:52.642 ' 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:01:52.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:52.642 --rc genhtml_branch_coverage=1 01:01:52.642 --rc genhtml_function_coverage=1 01:01:52.642 --rc genhtml_legend=1 01:01:52.642 --rc geninfo_all_blocks=1 01:01:52.642 --rc geninfo_unexecuted_blocks=1 01:01:52.642 01:01:52.642 ' 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:01:52.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:52.642 --rc genhtml_branch_coverage=1 01:01:52.642 --rc genhtml_function_coverage=1 01:01:52.642 --rc genhtml_legend=1 01:01:52.642 --rc geninfo_all_blocks=1 01:01:52.642 --rc geninfo_unexecuted_blocks=1 01:01:52.642 01:01:52.642 ' 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:01:52.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:52.642 --rc genhtml_branch_coverage=1 01:01:52.642 --rc genhtml_function_coverage=1 01:01:52.642 --rc genhtml_legend=1 01:01:52.642 --rc geninfo_all_blocks=1 01:01:52.642 --rc geninfo_unexecuted_blocks=1 01:01:52.642 01:01:52.642 ' 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:01:52.642 06:00:47 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:01:52.642 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 01:01:52.642 
06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:01:52.642 Cannot find device "nvmf_init_br" 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:01:52.642 Cannot find device "nvmf_init_br2" 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 01:01:52.642 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:01:52.642 Cannot find device "nvmf_tgt_br" 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:01:52.901 Cannot find device "nvmf_tgt_br2" 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:01:52.901 Cannot find device "nvmf_init_br" 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:01:52.901 Cannot find device "nvmf_init_br2" 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:01:52.901 Cannot find device "nvmf_tgt_br" 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:01:52.901 Cannot find device "nvmf_tgt_br2" 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:01:52.901 Cannot find device "nvmf_br" 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:01:52.901 Cannot find device "nvmf_init_if" 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:01:52.901 Cannot find device "nvmf_init_if2" 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:01:52.901 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:01:52.901 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:01:52.901 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:01:53.160 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:01:53.160 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.186 ms 01:01:53.160 01:01:53.160 --- 10.0.0.3 ping statistics --- 01:01:53.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:53.160 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:01:53.160 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:01:53.160 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.082 ms 01:01:53.160 01:01:53.160 --- 10.0.0.4 ping statistics --- 01:01:53.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:53.160 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:01:53.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:01:53.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 01:01:53.160 01:01:53.160 --- 10.0.0.1 ping statistics --- 01:01:53.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:53.160 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:01:53.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:01:53.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 01:01:53.160 01:01:53.160 --- 10.0.0.2 ping statistics --- 01:01:53.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:53.160 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62580 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62580 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 62580 ']' 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 01:01:53.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 01:01:53.160 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:01:53.419 [2024-12-09 06:00:47.770771] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:01:53.419 [2024-12-09 06:00:47.770832] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:01:53.419 [2024-12-09 06:00:47.907612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:01:53.419 [2024-12-09 06:00:47.948903] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:01:53.419 [2024-12-09 06:00:47.949096] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:01:53.419 [2024-12-09 06:00:47.949243] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:01:53.419 [2024-12-09 06:00:47.949292] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:01:53.419 [2024-12-09 06:00:47.949319] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:01:53.419 [2024-12-09 06:00:47.950191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:01:53.419 [2024-12-09 06:00:47.950326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:53.419 [2024-12-09 06:00:47.950326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:01:53.419 [2024-12-09 06:00:47.992798] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:01:54.352 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:01:54.352 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 01:01:54.352 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:01:54.352 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 01:01:54.352 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:01:54.352 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:01:54.352 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:01:54.352 [2024-12-09 06:00:48.882299] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:01:54.352 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:01:54.609 06:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 01:01:54.609 06:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:01:54.868 06:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 01:01:54.868 06:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 01:01:55.127 06:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 01:01:55.387 06:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b1637269-c957-42df-99b3-41846591c406 01:01:55.387 06:00:49 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1637269-c957-42df-99b3-41846591c406 lvol 20 01:01:55.646 06:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=38c66628-9129-4011-90fc-b882112c1b79 01:01:55.646 06:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:01:55.906 06:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 38c66628-9129-4011-90fc-b882112c1b79 01:01:55.906 06:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:01:56.166 [2024-12-09 06:00:50.638180] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:01:56.166 06:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:01:56.426 06:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 01:01:56.426 06:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62650 01:01:56.426 06:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 01:01:57.363 06:00:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 38c66628-9129-4011-90fc-b882112c1b79 MY_SNAPSHOT 01:01:57.620 06:00:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=73b54d71-a33b-4492-93ce-045701ed06dc 01:01:57.620 06:00:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 38c66628-9129-4011-90fc-b882112c1b79 30 01:01:57.879 06:00:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 73b54d71-a33b-4492-93ce-045701ed06dc MY_CLONE 01:01:58.137 06:00:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=63c7319f-38c4-4a55-b76f-377d5684e0e2 01:01:58.137 06:00:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 63c7319f-38c4-4a55-b76f-377d5684e0e2 01:01:58.705 06:00:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62650 01:02:06.816 Initializing NVMe Controllers 01:02:06.816 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 01:02:06.816 Controller IO queue size 128, less than required. 01:02:06.816 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:02:06.816 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 01:02:06.816 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 01:02:06.816 Initialization complete. Launching workers. 
01:02:06.816 ======================================================== 01:02:06.816 Latency(us) 01:02:06.816 Device Information : IOPS MiB/s Average min max 01:02:06.816 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 6110.39 23.87 20954.23 1713.26 102679.91 01:02:06.816 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8530.09 33.32 15013.90 1834.48 93041.31 01:02:06.816 ======================================================== 01:02:06.816 Total : 14640.49 57.19 17493.17 1713.26 102679.91 01:02:06.816 01:02:06.816 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:02:07.075 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 38c66628-9129-4011-90fc-b882112c1b79 01:02:07.075 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b1637269-c957-42df-99b3-41846591c406 01:02:07.334 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 01:02:07.335 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 01:02:07.335 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 01:02:07.335 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 01:02:07.335 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 01:02:07.335 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:02:07.335 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 01:02:07.335 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 01:02:07.335 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:02:07.335 rmmod nvme_tcp 01:02:07.335 rmmod nvme_fabrics 01:02:07.335 rmmod nvme_keyring 01:02:07.594 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:02:07.594 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 01:02:07.594 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 01:02:07.594 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62580 ']' 01:02:07.594 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62580 01:02:07.594 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 62580 ']' 01:02:07.594 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 62580 01:02:07.594 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 01:02:07.594 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:02:07.594 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62580 01:02:07.594 killing process with pid 62580 01:02:07.594 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:02:07.594 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:02:07.594 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62580' 01:02:07.594 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 62580 01:02:07.594 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 62580 01:02:07.854 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:02:07.854 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:02:07.854 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:02:07.854 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 01:02:07.854 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 01:02:07.854 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:02:07.854 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 01:02:07.854 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:02:07.854 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:02:07.854 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:02:07.854 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:02:07.854 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:02:07.854 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:02:07.854 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:02:07.854 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:02:07.854 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:02:07.854 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:02:07.854 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:02:07.854 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:02:07.854 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:02:07.854 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:02:08.115 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:02:08.115 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 01:02:08.115 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:02:08.115 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:02:08.115 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:02:08.115 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 01:02:08.115 ************************************ 01:02:08.115 END TEST nvmf_lvol 01:02:08.115 ************************************ 01:02:08.115 01:02:08.115 real 0m15.684s 01:02:08.115 user 
1m2.619s 01:02:08.115 sys 0m4.856s 01:02:08.115 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 01:02:08.115 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:02:08.115 06:01:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 01:02:08.115 06:01:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:02:08.115 06:01:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:02:08.115 06:01:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:02:08.115 ************************************ 01:02:08.115 START TEST nvmf_lvs_grow 01:02:08.115 ************************************ 01:02:08.115 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 01:02:08.375 * Looking for test storage... 01:02:08.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:02:08.375 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:02:08.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:08.376 --rc genhtml_branch_coverage=1 01:02:08.376 --rc genhtml_function_coverage=1 01:02:08.376 --rc genhtml_legend=1 01:02:08.376 --rc geninfo_all_blocks=1 01:02:08.376 --rc geninfo_unexecuted_blocks=1 01:02:08.376 01:02:08.376 ' 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:02:08.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:08.376 --rc genhtml_branch_coverage=1 01:02:08.376 --rc genhtml_function_coverage=1 01:02:08.376 --rc genhtml_legend=1 01:02:08.376 --rc geninfo_all_blocks=1 01:02:08.376 --rc geninfo_unexecuted_blocks=1 01:02:08.376 01:02:08.376 ' 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:02:08.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:08.376 --rc genhtml_branch_coverage=1 01:02:08.376 --rc genhtml_function_coverage=1 01:02:08.376 --rc genhtml_legend=1 01:02:08.376 --rc geninfo_all_blocks=1 01:02:08.376 --rc geninfo_unexecuted_blocks=1 01:02:08.376 01:02:08.376 ' 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:02:08.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:08.376 --rc genhtml_branch_coverage=1 01:02:08.376 --rc genhtml_function_coverage=1 01:02:08.376 --rc genhtml_legend=1 01:02:08.376 --rc geninfo_all_blocks=1 01:02:08.376 --rc geninfo_unexecuted_blocks=1 01:02:08.376 01:02:08.376 ' 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 01:02:08.376 06:01:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:02:08.376 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
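[editor's note, not part of the captured trace] nvmf_lvs_grow.sh drives two SPDK processes at once: the nvmf_tgt running inside the nvmf_tgt_ns_spdk namespace, reached over the default /var/tmp/spdk.sock, and a bdevperf initiator reached over its own /var/tmp/bdevperf.sock set up just above. rpc.py picks the process with -s. A minimal sketch of that two-socket pattern, using only invocations that appear verbatim later in this trace (paths, address and NQN taken from the log, not from the harness source):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Target-side RPC: goes to the default socket (/var/tmp/spdk.sock) of the nvmf_tgt in the namespace.
  $RPC nvmf_create_transport -t tcp -o -u 8192
  # Initiator-side RPC: goes to bdevperf's private socket, selected with -s.
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0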
01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:02:08.376 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:02:08.637 Cannot find device "nvmf_init_br" 01:02:08.637 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 01:02:08.637 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:02:08.637 Cannot find device "nvmf_init_br2" 01:02:08.637 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 01:02:08.637 06:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:02:08.637 Cannot find device "nvmf_tgt_br" 01:02:08.637 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 01:02:08.637 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:02:08.637 Cannot find device "nvmf_tgt_br2" 01:02:08.637 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 01:02:08.637 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:02:08.637 Cannot find device "nvmf_init_br" 01:02:08.637 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 01:02:08.637 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:02:08.637 Cannot find device "nvmf_init_br2" 01:02:08.637 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 01:02:08.637 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:02:08.637 Cannot find device "nvmf_tgt_br" 01:02:08.637 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 01:02:08.637 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:02:08.637 Cannot find device "nvmf_tgt_br2" 01:02:08.637 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 01:02:08.637 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:02:08.637 Cannot find device "nvmf_br" 01:02:08.637 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 01:02:08.637 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:02:08.637 Cannot find device "nvmf_init_if" 01:02:08.637 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 01:02:08.637 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:02:08.637 Cannot find device "nvmf_init_if2" 01:02:08.637 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 01:02:08.637 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:02:08.637 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:02:08.637 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 01:02:08.637 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:02:08.637 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 01:02:08.637 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 01:02:08.637 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:02:08.637 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:02:08.637 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:02:08.637 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:02:08.637 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:02:08.637 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:02:08.897 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:02:08.897 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:02:08.897 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:02:08.897 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:02:08.897 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:02:08.897 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:02:08.897 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:02:08.897 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
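[editor's note, not part of the captured trace] The trace above is the harness's nvmf_veth_init step rebuilding the virtual test network before the lvs_grow run. Condensed into plain shell, and using the interface names shown in the trace (the real helper lives in test/nvmf/common.sh, so treat this as an illustrative sketch rather than the harness code itself), the topology amounts to:

  # One network namespace for the target, four veth pairs, one bridge in the root namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Addresses match the trace: initiators on 10.0.0.1/.2, target namespace on 10.0.0.3/.4.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # Bring everything up and enslave the root-namespace ends to a single bridge, nvmf_br.
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done

The iptables ACCEPT rules and the four pings that follow in the trace then verify that TCP port 4420 is reachable in both directions across nvmf_br before the target application is started.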
01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:02:08.898 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:02:08.898 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 01:02:08.898 01:02:08.898 --- 10.0.0.3 ping statistics --- 01:02:08.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:08.898 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:02:08.898 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:02:08.898 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 01:02:08.898 01:02:08.898 --- 10.0.0.4 ping statistics --- 01:02:08.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:08.898 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:02:08.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:02:08.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 01:02:08.898 01:02:08.898 --- 10.0.0.1 ping statistics --- 01:02:08.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:08.898 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:02:08.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:02:08.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 01:02:08.898 01:02:08.898 --- 10.0.0.2 ping statistics --- 01:02:08.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:08.898 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:02:08.898 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:02:09.158 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 01:02:09.158 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:02:09.158 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 01:02:09.158 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:02:09.158 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 01:02:09.158 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63035 01:02:09.158 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63035 01:02:09.158 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 63035 ']' 01:02:09.158 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:09.158 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:09.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:02:09.158 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:02:09.158 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:09.158 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:02:09.158 [2024-12-09 06:01:03.555604] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:02:09.158 [2024-12-09 06:01:03.555676] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:02:09.158 [2024-12-09 06:01:03.707967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:09.417 [2024-12-09 06:01:03.745969] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:02:09.417 [2024-12-09 06:01:03.746014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:02:09.417 [2024-12-09 06:01:03.746039] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:02:09.417 [2024-12-09 06:01:03.746047] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:02:09.417 [2024-12-09 06:01:03.746053] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:02:09.417 [2024-12-09 06:01:03.746328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:02:09.417 [2024-12-09 06:01:03.787621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:02:09.986 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:09.986 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 01:02:09.986 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:02:09.986 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 01:02:09.986 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:02:09.986 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:02:09.986 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:02:10.245 [2024-12-09 06:01:04.642969] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:02:10.245 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 01:02:10.245 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:02:10.245 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 01:02:10.245 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:02:10.245 ************************************ 01:02:10.245 START TEST lvs_grow_clean 01:02:10.245 ************************************ 01:02:10.245 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 01:02:10.245 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 01:02:10.245 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 01:02:10.245 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 01:02:10.245 06:01:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 01:02:10.245 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 01:02:10.245 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 01:02:10.245 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:02:10.245 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:02:10.245 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:02:10.505 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 01:02:10.505 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 01:02:10.763 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6eeaf513-5708-4b87-8499-114f528cc861 01:02:10.763 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6eeaf513-5708-4b87-8499-114f528cc861 01:02:10.763 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 01:02:10.763 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 01:02:10.763 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 01:02:10.763 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6eeaf513-5708-4b87-8499-114f528cc861 lvol 150 01:02:11.020 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=36536508-41f7-4111-a8ca-4b0cf59df945 01:02:11.021 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:02:11.021 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 01:02:11.278 [2024-12-09 06:01:05.715018] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 01:02:11.278 [2024-12-09 06:01:05.715071] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 01:02:11.278 true 01:02:11.278 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 01:02:11.278 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6eeaf513-5708-4b87-8499-114f528cc861 01:02:11.536 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 01:02:11.537 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:02:11.794 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 36536508-41f7-4111-a8ca-4b0cf59df945 01:02:11.794 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:02:12.053 [2024-12-09 06:01:06.518099] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:02:12.053 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:02:12.312 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63112 01:02:12.312 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:02:12.312 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 01:02:12.312 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63112 /var/tmp/bdevperf.sock 01:02:12.312 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 63112 ']' 01:02:12.312 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:02:12.312 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:12.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:02:12.312 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:02:12.312 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:12.312 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 01:02:12.312 [2024-12-09 06:01:06.800739] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
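The target-side setup traced above reduces to a short sequence of rpc.py calls; a condensed sketch (the transport was already created with nvmf_create_transport -t tcp -o -u 8192, the aio_bdev path and the 10.0.0.3:4420 listener mirror this run, and $rpc / $aio_file are shorthand introduced here for readability):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    rm -f "$aio_file"
    truncate -s 200M "$aio_file"                              # 200 MiB backing file
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096            # AIO bdev with 4 KiB blocks
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
            --md-pages-per-cluster-ratio 300 aio_bdev lvs)    # 49 data clusters of 4 MiB
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)          # 150 MiB logical volume

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

bdevperf then attaches from the initiator side over the same listener (bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode0) and drives 10 seconds of 4 KiB random writes while the grow happens underneath.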
01:02:12.312 [2024-12-09 06:01:06.800816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63112 ] 01:02:12.570 [2024-12-09 06:01:06.950390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:12.570 [2024-12-09 06:01:06.989758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:02:12.570 [2024-12-09 06:01:07.031979] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:02:13.136 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:13.136 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 01:02:13.136 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 01:02:13.442 Nvme0n1 01:02:13.442 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 01:02:13.713 [ 01:02:13.713 { 01:02:13.713 "name": "Nvme0n1", 01:02:13.713 "aliases": [ 01:02:13.713 "36536508-41f7-4111-a8ca-4b0cf59df945" 01:02:13.713 ], 01:02:13.713 "product_name": "NVMe disk", 01:02:13.713 "block_size": 4096, 01:02:13.713 "num_blocks": 38912, 01:02:13.713 "uuid": "36536508-41f7-4111-a8ca-4b0cf59df945", 01:02:13.713 "numa_id": -1, 01:02:13.713 "assigned_rate_limits": { 01:02:13.713 "rw_ios_per_sec": 0, 01:02:13.713 "rw_mbytes_per_sec": 0, 01:02:13.713 "r_mbytes_per_sec": 0, 01:02:13.713 "w_mbytes_per_sec": 0 01:02:13.713 }, 01:02:13.713 "claimed": false, 01:02:13.713 "zoned": false, 01:02:13.713 "supported_io_types": { 01:02:13.713 "read": true, 01:02:13.713 "write": true, 01:02:13.713 "unmap": true, 01:02:13.713 "flush": true, 01:02:13.713 "reset": true, 01:02:13.713 "nvme_admin": true, 01:02:13.713 "nvme_io": true, 01:02:13.713 "nvme_io_md": false, 01:02:13.713 "write_zeroes": true, 01:02:13.713 "zcopy": false, 01:02:13.713 "get_zone_info": false, 01:02:13.713 "zone_management": false, 01:02:13.713 "zone_append": false, 01:02:13.713 "compare": true, 01:02:13.713 "compare_and_write": true, 01:02:13.713 "abort": true, 01:02:13.713 "seek_hole": false, 01:02:13.713 "seek_data": false, 01:02:13.713 "copy": true, 01:02:13.713 "nvme_iov_md": false 01:02:13.713 }, 01:02:13.713 "memory_domains": [ 01:02:13.713 { 01:02:13.713 "dma_device_id": "system", 01:02:13.713 "dma_device_type": 1 01:02:13.713 } 01:02:13.713 ], 01:02:13.713 "driver_specific": { 01:02:13.713 "nvme": [ 01:02:13.713 { 01:02:13.713 "trid": { 01:02:13.713 "trtype": "TCP", 01:02:13.713 "adrfam": "IPv4", 01:02:13.713 "traddr": "10.0.0.3", 01:02:13.713 "trsvcid": "4420", 01:02:13.713 "subnqn": "nqn.2016-06.io.spdk:cnode0" 01:02:13.713 }, 01:02:13.713 "ctrlr_data": { 01:02:13.713 "cntlid": 1, 01:02:13.713 "vendor_id": "0x8086", 01:02:13.713 "model_number": "SPDK bdev Controller", 01:02:13.713 "serial_number": "SPDK0", 01:02:13.713 "firmware_revision": "25.01", 01:02:13.713 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:02:13.713 "oacs": { 01:02:13.713 "security": 0, 01:02:13.713 "format": 0, 01:02:13.713 "firmware": 0, 
01:02:13.713 "ns_manage": 0 01:02:13.713 }, 01:02:13.713 "multi_ctrlr": true, 01:02:13.713 "ana_reporting": false 01:02:13.713 }, 01:02:13.713 "vs": { 01:02:13.713 "nvme_version": "1.3" 01:02:13.713 }, 01:02:13.713 "ns_data": { 01:02:13.713 "id": 1, 01:02:13.713 "can_share": true 01:02:13.713 } 01:02:13.713 } 01:02:13.713 ], 01:02:13.713 "mp_policy": "active_passive" 01:02:13.713 } 01:02:13.713 } 01:02:13.713 ] 01:02:13.713 06:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:02:13.713 06:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63130 01:02:13.713 06:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 01:02:13.713 Running I/O for 10 seconds... 01:02:14.658 Latency(us) 01:02:14.658 [2024-12-09T06:01:09.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:02:14.658 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:02:14.658 Nvme0n1 : 1.00 10400.00 40.62 0.00 0.00 0.00 0.00 0.00 01:02:14.658 [2024-12-09T06:01:09.245Z] =================================================================================================================== 01:02:14.658 [2024-12-09T06:01:09.245Z] Total : 10400.00 40.62 0.00 0.00 0.00 0.00 0.00 01:02:14.658 01:02:15.593 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6eeaf513-5708-4b87-8499-114f528cc861 01:02:15.593 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:02:15.593 Nvme0n1 : 2.00 10464.50 40.88 0.00 0.00 0.00 0.00 0.00 01:02:15.593 [2024-12-09T06:01:10.180Z] =================================================================================================================== 01:02:15.593 [2024-12-09T06:01:10.180Z] Total : 10464.50 40.88 0.00 0.00 0.00 0.00 0.00 01:02:15.593 01:02:15.852 true 01:02:15.852 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6eeaf513-5708-4b87-8499-114f528cc861 01:02:15.852 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 01:02:16.110 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 01:02:16.110 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 01:02:16.110 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63130 01:02:16.678 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:02:16.678 Nvme0n1 : 3.00 10312.00 40.28 0.00 0.00 0.00 0.00 0.00 01:02:16.678 [2024-12-09T06:01:11.265Z] =================================================================================================================== 01:02:16.678 [2024-12-09T06:01:11.265Z] Total : 10312.00 40.28 0.00 0.00 0.00 0.00 0.00 01:02:16.678 01:02:17.614 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:02:17.614 Nvme0n1 : 4.00 10327.75 40.34 0.00 0.00 0.00 0.00 0.00 01:02:17.614 [2024-12-09T06:01:12.201Z] 
=================================================================================================================== 01:02:17.614 [2024-12-09T06:01:12.201Z] Total : 10327.75 40.34 0.00 0.00 0.00 0.00 0.00 01:02:17.614 01:02:19.004 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:02:19.004 Nvme0n1 : 5.00 10290.00 40.20 0.00 0.00 0.00 0.00 0.00 01:02:19.004 [2024-12-09T06:01:13.591Z] =================================================================================================================== 01:02:19.004 [2024-12-09T06:01:13.591Z] Total : 10290.00 40.20 0.00 0.00 0.00 0.00 0.00 01:02:19.004 01:02:19.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:02:19.940 Nvme0n1 : 6.00 10238.67 39.99 0.00 0.00 0.00 0.00 0.00 01:02:19.940 [2024-12-09T06:01:14.527Z] =================================================================================================================== 01:02:19.940 [2024-12-09T06:01:14.527Z] Total : 10238.67 39.99 0.00 0.00 0.00 0.00 0.00 01:02:19.940 01:02:20.876 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:02:20.876 Nvme0n1 : 7.00 10173.00 39.74 0.00 0.00 0.00 0.00 0.00 01:02:20.876 [2024-12-09T06:01:15.463Z] =================================================================================================================== 01:02:20.876 [2024-12-09T06:01:15.463Z] Total : 10173.00 39.74 0.00 0.00 0.00 0.00 0.00 01:02:20.876 01:02:21.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:02:21.812 Nvme0n1 : 8.00 10121.00 39.54 0.00 0.00 0.00 0.00 0.00 01:02:21.812 [2024-12-09T06:01:16.399Z] =================================================================================================================== 01:02:21.812 [2024-12-09T06:01:16.399Z] Total : 10121.00 39.54 0.00 0.00 0.00 0.00 0.00 01:02:21.812 01:02:22.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:02:22.745 Nvme0n1 : 9.00 10082.44 39.38 0.00 0.00 0.00 0.00 0.00 01:02:22.745 [2024-12-09T06:01:17.332Z] =================================================================================================================== 01:02:22.745 [2024-12-09T06:01:17.332Z] Total : 10082.44 39.38 0.00 0.00 0.00 0.00 0.00 01:02:22.745 01:02:23.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:02:23.680 Nvme0n1 : 10.00 10046.50 39.24 0.00 0.00 0.00 0.00 0.00 01:02:23.680 [2024-12-09T06:01:18.267Z] =================================================================================================================== 01:02:23.680 [2024-12-09T06:01:18.267Z] Total : 10046.50 39.24 0.00 0.00 0.00 0.00 0.00 01:02:23.680 01:02:23.680 01:02:23.680 Latency(us) 01:02:23.680 [2024-12-09T06:01:18.267Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:02:23.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:02:23.680 Nvme0n1 : 10.01 10048.35 39.25 0.00 0.00 12734.74 4842.82 49481.00 01:02:23.680 [2024-12-09T06:01:18.267Z] =================================================================================================================== 01:02:23.680 [2024-12-09T06:01:18.267Z] Total : 10048.35 39.25 0.00 0.00 12734.74 4842.82 49481.00 01:02:23.680 { 01:02:23.680 "results": [ 01:02:23.681 { 01:02:23.681 "job": "Nvme0n1", 01:02:23.681 "core_mask": "0x2", 01:02:23.681 "workload": "randwrite", 01:02:23.681 "status": "finished", 01:02:23.681 "queue_depth": 128, 01:02:23.681 "io_size": 4096, 01:02:23.681 
"runtime": 10.010895, 01:02:23.681 "iops": 10048.3523201472, 01:02:23.681 "mibps": 39.251376250575, 01:02:23.681 "io_failed": 0, 01:02:23.681 "io_timeout": 0, 01:02:23.681 "avg_latency_us": 12734.741995931994, 01:02:23.681 "min_latency_us": 4842.820883534137, 01:02:23.681 "max_latency_us": 49480.99598393574 01:02:23.681 } 01:02:23.681 ], 01:02:23.681 "core_count": 1 01:02:23.681 } 01:02:23.681 06:01:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63112 01:02:23.681 06:01:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 63112 ']' 01:02:23.681 06:01:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 63112 01:02:23.681 06:01:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 01:02:23.681 06:01:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:02:23.681 06:01:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63112 01:02:23.681 killing process with pid 63112 01:02:23.681 Received shutdown signal, test time was about 10.000000 seconds 01:02:23.681 01:02:23.681 Latency(us) 01:02:23.681 [2024-12-09T06:01:18.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:02:23.681 [2024-12-09T06:01:18.268Z] =================================================================================================================== 01:02:23.681 [2024-12-09T06:01:18.268Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:02:23.681 06:01:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:02:23.681 06:01:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:02:23.681 06:01:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63112' 01:02:23.681 06:01:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 63112 01:02:23.681 06:01:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 63112 01:02:23.939 06:01:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:02:24.197 06:01:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:02:24.456 06:01:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6eeaf513-5708-4b87-8499-114f528cc861 01:02:24.456 06:01:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 01:02:24.718 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 01:02:24.718 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 01:02:24.718 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:02:24.718 [2024-12-09 06:01:19.282210] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 01:02:24.975 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6eeaf513-5708-4b87-8499-114f528cc861 01:02:24.975 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 01:02:24.975 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6eeaf513-5708-4b87-8499-114f528cc861 01:02:24.975 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:02:24.975 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:02:24.975 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:02:24.975 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:02:24.975 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:02:24.975 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:02:24.975 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:02:24.975 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 01:02:24.975 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6eeaf513-5708-4b87-8499-114f528cc861 01:02:24.975 request: 01:02:24.975 { 01:02:24.975 "uuid": "6eeaf513-5708-4b87-8499-114f528cc861", 01:02:24.975 "method": "bdev_lvol_get_lvstores", 01:02:24.975 "req_id": 1 01:02:24.975 } 01:02:24.975 Got JSON-RPC error response 01:02:24.975 response: 01:02:24.975 { 01:02:24.975 "code": -19, 01:02:24.975 "message": "No such device" 01:02:24.975 } 01:02:24.975 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 01:02:24.975 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:02:24.975 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:02:24.975 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:02:24.975 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:02:25.233 aio_bdev 01:02:25.233 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
36536508-41f7-4111-a8ca-4b0cf59df945 01:02:25.233 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=36536508-41f7-4111-a8ca-4b0cf59df945 01:02:25.233 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:02:25.233 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 01:02:25.233 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:02:25.233 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:02:25.233 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 01:02:25.491 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 36536508-41f7-4111-a8ca-4b0cf59df945 -t 2000 01:02:25.748 [ 01:02:25.748 { 01:02:25.748 "name": "36536508-41f7-4111-a8ca-4b0cf59df945", 01:02:25.748 "aliases": [ 01:02:25.748 "lvs/lvol" 01:02:25.748 ], 01:02:25.748 "product_name": "Logical Volume", 01:02:25.748 "block_size": 4096, 01:02:25.748 "num_blocks": 38912, 01:02:25.748 "uuid": "36536508-41f7-4111-a8ca-4b0cf59df945", 01:02:25.748 "assigned_rate_limits": { 01:02:25.748 "rw_ios_per_sec": 0, 01:02:25.748 "rw_mbytes_per_sec": 0, 01:02:25.748 "r_mbytes_per_sec": 0, 01:02:25.748 "w_mbytes_per_sec": 0 01:02:25.748 }, 01:02:25.748 "claimed": false, 01:02:25.748 "zoned": false, 01:02:25.748 "supported_io_types": { 01:02:25.748 "read": true, 01:02:25.748 "write": true, 01:02:25.748 "unmap": true, 01:02:25.748 "flush": false, 01:02:25.748 "reset": true, 01:02:25.748 "nvme_admin": false, 01:02:25.748 "nvme_io": false, 01:02:25.748 "nvme_io_md": false, 01:02:25.748 "write_zeroes": true, 01:02:25.748 "zcopy": false, 01:02:25.748 "get_zone_info": false, 01:02:25.748 "zone_management": false, 01:02:25.748 "zone_append": false, 01:02:25.748 "compare": false, 01:02:25.748 "compare_and_write": false, 01:02:25.748 "abort": false, 01:02:25.748 "seek_hole": true, 01:02:25.748 "seek_data": true, 01:02:25.748 "copy": false, 01:02:25.748 "nvme_iov_md": false 01:02:25.748 }, 01:02:25.748 "driver_specific": { 01:02:25.748 "lvol": { 01:02:25.748 "lvol_store_uuid": "6eeaf513-5708-4b87-8499-114f528cc861", 01:02:25.748 "base_bdev": "aio_bdev", 01:02:25.748 "thin_provision": false, 01:02:25.748 "num_allocated_clusters": 38, 01:02:25.748 "snapshot": false, 01:02:25.748 "clone": false, 01:02:25.748 "esnap_clone": false 01:02:25.748 } 01:02:25.748 } 01:02:25.748 } 01:02:25.748 ] 01:02:25.749 06:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 01:02:25.749 06:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6eeaf513-5708-4b87-8499-114f528cc861 01:02:25.749 06:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 01:02:25.749 06:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 01:02:25.749 06:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6eeaf513-5708-4b87-8499-114f528cc861 01:02:25.749 06:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 01:02:26.006 06:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 01:02:26.006 06:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 36536508-41f7-4111-a8ca-4b0cf59df945 01:02:26.264 06:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6eeaf513-5708-4b87-8499-114f528cc861 01:02:26.522 06:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:02:26.780 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:02:27.347 ************************************ 01:02:27.347 END TEST lvs_grow_clean 01:02:27.347 ************************************ 01:02:27.347 01:02:27.347 real 0m16.953s 01:02:27.347 user 0m14.790s 01:02:27.347 sys 0m3.255s 01:02:27.347 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 01:02:27.347 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 01:02:27.347 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 01:02:27.347 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:02:27.347 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 01:02:27.347 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:02:27.347 ************************************ 01:02:27.347 START TEST lvs_grow_dirty 01:02:27.347 ************************************ 01:02:27.347 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 01:02:27.347 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 01:02:27.347 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 01:02:27.347 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 01:02:27.347 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 01:02:27.347 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 01:02:27.347 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 01:02:27.347 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:02:27.347 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:02:27.347 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:02:27.606 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 01:02:27.606 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 01:02:27.606 06:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=dbe84898-76ec-4485-b50f-90d2c8f7a353 01:02:27.606 06:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbe84898-76ec-4485-b50f-90d2c8f7a353 01:02:27.606 06:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 01:02:27.865 06:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 01:02:27.865 06:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 01:02:27.865 06:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dbe84898-76ec-4485-b50f-90d2c8f7a353 lvol 150 01:02:28.123 06:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=7101bdf0-3497-4479-bd9e-3225cc4f34bd 01:02:28.123 06:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:02:28.123 06:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 01:02:28.382 [2024-12-09 06:01:22.750944] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 01:02:28.382 [2024-12-09 06:01:22.751002] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 01:02:28.382 true 01:02:28.382 06:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbe84898-76ec-4485-b50f-90d2c8f7a353 01:02:28.382 06:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 01:02:28.641 06:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 01:02:28.641 06:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:02:28.641 06:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7101bdf0-3497-4479-bd9e-3225cc4f34bd 01:02:28.899 06:01:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:02:29.157 [2024-12-09 06:01:23.562622] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:02:29.157 06:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:02:29.415 06:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63372 01:02:29.415 06:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 01:02:29.415 06:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:02:29.415 06:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63372 /var/tmp/bdevperf.sock 01:02:29.415 06:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63372 ']' 01:02:29.415 06:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:02:29.415 06:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:29.416 06:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:02:29.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:02:29.416 06:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:29.416 06:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:02:29.416 [2024-12-09 06:01:23.859898] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
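The grow itself is the two-step sequence exercised in both the clean and dirty variants: enlarging the backing file and rescanning the AIO bdev does not by itself change the lvstore, and only bdev_lvol_grow_lvstore claims the new space. A sketch of that sequence, reusing the shorthand from the earlier sketch (cluster counts 49 -> 99 match this run):

    truncate -s 400M "$aio_file"          # enlarge the backing file to 400 MiB
    $rpc bdev_aio_rescan aio_bdev         # AIO bdev grows: 51200 -> 102400 blocks
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49
    $rpc bdev_lvol_grow_lvstore -u "$lvs"                                      # claim the new space
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # now 99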
01:02:29.416 [2024-12-09 06:01:23.860161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63372 ] 01:02:29.674 [2024-12-09 06:01:24.013571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:29.674 [2024-12-09 06:01:24.072172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:02:29.674 [2024-12-09 06:01:24.148574] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:02:30.240 06:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:30.240 06:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 01:02:30.240 06:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 01:02:30.498 Nvme0n1 01:02:30.498 06:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 01:02:30.756 [ 01:02:30.756 { 01:02:30.756 "name": "Nvme0n1", 01:02:30.756 "aliases": [ 01:02:30.756 "7101bdf0-3497-4479-bd9e-3225cc4f34bd" 01:02:30.756 ], 01:02:30.756 "product_name": "NVMe disk", 01:02:30.756 "block_size": 4096, 01:02:30.756 "num_blocks": 38912, 01:02:30.756 "uuid": "7101bdf0-3497-4479-bd9e-3225cc4f34bd", 01:02:30.756 "numa_id": -1, 01:02:30.756 "assigned_rate_limits": { 01:02:30.756 "rw_ios_per_sec": 0, 01:02:30.756 "rw_mbytes_per_sec": 0, 01:02:30.756 "r_mbytes_per_sec": 0, 01:02:30.756 "w_mbytes_per_sec": 0 01:02:30.756 }, 01:02:30.756 "claimed": false, 01:02:30.756 "zoned": false, 01:02:30.756 "supported_io_types": { 01:02:30.756 "read": true, 01:02:30.756 "write": true, 01:02:30.756 "unmap": true, 01:02:30.756 "flush": true, 01:02:30.756 "reset": true, 01:02:30.756 "nvme_admin": true, 01:02:30.756 "nvme_io": true, 01:02:30.756 "nvme_io_md": false, 01:02:30.756 "write_zeroes": true, 01:02:30.756 "zcopy": false, 01:02:30.756 "get_zone_info": false, 01:02:30.756 "zone_management": false, 01:02:30.756 "zone_append": false, 01:02:30.756 "compare": true, 01:02:30.756 "compare_and_write": true, 01:02:30.756 "abort": true, 01:02:30.756 "seek_hole": false, 01:02:30.756 "seek_data": false, 01:02:30.756 "copy": true, 01:02:30.756 "nvme_iov_md": false 01:02:30.756 }, 01:02:30.756 "memory_domains": [ 01:02:30.756 { 01:02:30.756 "dma_device_id": "system", 01:02:30.756 "dma_device_type": 1 01:02:30.756 } 01:02:30.756 ], 01:02:30.756 "driver_specific": { 01:02:30.756 "nvme": [ 01:02:30.756 { 01:02:30.756 "trid": { 01:02:30.756 "trtype": "TCP", 01:02:30.756 "adrfam": "IPv4", 01:02:30.756 "traddr": "10.0.0.3", 01:02:30.756 "trsvcid": "4420", 01:02:30.756 "subnqn": "nqn.2016-06.io.spdk:cnode0" 01:02:30.756 }, 01:02:30.756 "ctrlr_data": { 01:02:30.756 "cntlid": 1, 01:02:30.756 "vendor_id": "0x8086", 01:02:30.756 "model_number": "SPDK bdev Controller", 01:02:30.756 "serial_number": "SPDK0", 01:02:30.756 "firmware_revision": "25.01", 01:02:30.756 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:02:30.756 "oacs": { 01:02:30.756 "security": 0, 01:02:30.756 "format": 0, 01:02:30.756 "firmware": 0, 
01:02:30.756 "ns_manage": 0 01:02:30.756 }, 01:02:30.756 "multi_ctrlr": true, 01:02:30.756 "ana_reporting": false 01:02:30.756 }, 01:02:30.756 "vs": { 01:02:30.756 "nvme_version": "1.3" 01:02:30.756 }, 01:02:30.756 "ns_data": { 01:02:30.756 "id": 1, 01:02:30.756 "can_share": true 01:02:30.756 } 01:02:30.756 } 01:02:30.756 ], 01:02:30.756 "mp_policy": "active_passive" 01:02:30.756 } 01:02:30.756 } 01:02:30.756 ] 01:02:30.756 06:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:02:30.756 06:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63390 01:02:30.756 06:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 01:02:30.756 Running I/O for 10 seconds... 01:02:31.691 Latency(us) 01:02:31.691 [2024-12-09T06:01:26.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:02:31.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:02:31.691 Nvme0n1 : 1.00 10068.00 39.33 0.00 0.00 0.00 0.00 0.00 01:02:31.691 [2024-12-09T06:01:26.278Z] =================================================================================================================== 01:02:31.691 [2024-12-09T06:01:26.278Z] Total : 10068.00 39.33 0.00 0.00 0.00 0.00 0.00 01:02:31.691 01:02:32.629 06:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dbe84898-76ec-4485-b50f-90d2c8f7a353 01:02:32.629 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:02:32.629 Nvme0n1 : 2.00 10304.50 40.25 0.00 0.00 0.00 0.00 0.00 01:02:32.629 [2024-12-09T06:01:27.216Z] =================================================================================================================== 01:02:32.629 [2024-12-09T06:01:27.216Z] Total : 10304.50 40.25 0.00 0.00 0.00 0.00 0.00 01:02:32.629 01:02:32.888 true 01:02:32.888 06:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbe84898-76ec-4485-b50f-90d2c8f7a353 01:02:32.888 06:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 01:02:33.148 06:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 01:02:33.148 06:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 01:02:33.148 06:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63390 01:02:33.716 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:02:33.716 Nvme0n1 : 3.00 10319.67 40.31 0.00 0.00 0.00 0.00 0.00 01:02:33.716 [2024-12-09T06:01:28.303Z] =================================================================================================================== 01:02:33.716 [2024-12-09T06:01:28.303Z] Total : 10319.67 40.31 0.00 0.00 0.00 0.00 0.00 01:02:33.716 01:02:34.654 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:02:34.654 Nvme0n1 : 4.00 10269.50 40.12 0.00 0.00 0.00 0.00 0.00 01:02:34.654 [2024-12-09T06:01:29.241Z] 
=================================================================================================================== 01:02:34.654 [2024-12-09T06:01:29.241Z] Total : 10269.50 40.12 0.00 0.00 0.00 0.00 0.00 01:02:34.654 01:02:36.032 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:02:36.032 Nvme0n1 : 5.00 10222.00 39.93 0.00 0.00 0.00 0.00 0.00 01:02:36.032 [2024-12-09T06:01:30.619Z] =================================================================================================================== 01:02:36.032 [2024-12-09T06:01:30.619Z] Total : 10222.00 39.93 0.00 0.00 0.00 0.00 0.00 01:02:36.032 01:02:36.967 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:02:36.967 Nvme0n1 : 6.00 10158.00 39.68 0.00 0.00 0.00 0.00 0.00 01:02:36.967 [2024-12-09T06:01:31.554Z] =================================================================================================================== 01:02:36.967 [2024-12-09T06:01:31.554Z] Total : 10158.00 39.68 0.00 0.00 0.00 0.00 0.00 01:02:36.967 01:02:37.902 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:02:37.902 Nvme0n1 : 7.00 10119.29 39.53 0.00 0.00 0.00 0.00 0.00 01:02:37.902 [2024-12-09T06:01:32.489Z] =================================================================================================================== 01:02:37.902 [2024-12-09T06:01:32.489Z] Total : 10119.29 39.53 0.00 0.00 0.00 0.00 0.00 01:02:37.902 01:02:38.837 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:02:38.837 Nvme0n1 : 8.00 9716.00 37.95 0.00 0.00 0.00 0.00 0.00 01:02:38.837 [2024-12-09T06:01:33.424Z] =================================================================================================================== 01:02:38.837 [2024-12-09T06:01:33.424Z] Total : 9716.00 37.95 0.00 0.00 0.00 0.00 0.00 01:02:38.837 01:02:39.771 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:02:39.771 Nvme0n1 : 9.00 9654.22 37.71 0.00 0.00 0.00 0.00 0.00 01:02:39.771 [2024-12-09T06:01:34.358Z] =================================================================================================================== 01:02:39.771 [2024-12-09T06:01:34.358Z] Total : 9654.22 37.71 0.00 0.00 0.00 0.00 0.00 01:02:39.771 01:02:40.763 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:02:40.763 Nvme0n1 : 10.00 9652.90 37.71 0.00 0.00 0.00 0.00 0.00 01:02:40.763 [2024-12-09T06:01:35.350Z] =================================================================================================================== 01:02:40.763 [2024-12-09T06:01:35.350Z] Total : 9652.90 37.71 0.00 0.00 0.00 0.00 0.00 01:02:40.763 01:02:40.763 01:02:40.763 Latency(us) 01:02:40.763 [2024-12-09T06:01:35.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:02:40.763 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:02:40.763 Nvme0n1 : 10.01 9653.35 37.71 0.00 0.00 13255.63 3816.35 318362.83 01:02:40.763 [2024-12-09T06:01:35.350Z] =================================================================================================================== 01:02:40.763 [2024-12-09T06:01:35.350Z] Total : 9653.35 37.71 0.00 0.00 13255.63 3816.35 318362.83 01:02:40.763 { 01:02:40.763 "results": [ 01:02:40.763 { 01:02:40.763 "job": "Nvme0n1", 01:02:40.763 "core_mask": "0x2", 01:02:40.763 "workload": "randwrite", 01:02:40.763 "status": "finished", 01:02:40.763 "queue_depth": 128, 01:02:40.763 "io_size": 4096, 01:02:40.763 
"runtime": 10.012793, 01:02:40.763 "iops": 9653.350468745333, 01:02:40.763 "mibps": 37.70840026853646, 01:02:40.763 "io_failed": 0, 01:02:40.763 "io_timeout": 0, 01:02:40.763 "avg_latency_us": 13255.628153060425, 01:02:40.763 "min_latency_us": 3816.3534136546186, 01:02:40.763 "max_latency_us": 318362.83373493975 01:02:40.763 } 01:02:40.763 ], 01:02:40.763 "core_count": 1 01:02:40.763 } 01:02:40.763 06:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63372 01:02:40.763 06:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63372 ']' 01:02:40.763 06:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63372 01:02:40.763 06:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 01:02:40.763 06:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:02:40.763 06:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63372 01:02:40.763 killing process with pid 63372 01:02:40.763 Received shutdown signal, test time was about 10.000000 seconds 01:02:40.763 01:02:40.763 Latency(us) 01:02:40.763 [2024-12-09T06:01:35.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:02:40.763 [2024-12-09T06:01:35.350Z] =================================================================================================================== 01:02:40.763 [2024-12-09T06:01:35.350Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:02:40.763 06:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:02:40.763 06:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:02:40.763 06:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63372' 01:02:40.763 06:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63372 01:02:40.763 06:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63372 01:02:41.022 06:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:02:41.280 06:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:02:41.538 06:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbe84898-76ec-4485-b50f-90d2c8f7a353 01:02:41.538 06:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 01:02:41.797 06:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 01:02:41.797 06:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 01:02:41.797 06:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 
63035 01:02:41.797 06:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63035 01:02:41.797 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63035 Killed "${NVMF_APP[@]}" "$@" 01:02:41.797 06:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 01:02:41.797 06:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 01:02:41.797 06:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:02:41.797 06:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 01:02:41.797 06:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:02:41.797 06:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63528 01:02:41.797 06:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 01:02:41.797 06:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63528 01:02:41.797 06:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63528 ']' 01:02:41.797 06:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:41.797 06:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:41.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:02:41.797 06:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:02:41.797 06:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:41.797 06:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:02:41.797 [2024-12-09 06:01:36.300895] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:02:41.797 [2024-12-09 06:01:36.301255] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:02:42.055 [2024-12-09 06:01:36.454206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:42.055 [2024-12-09 06:01:36.492059] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:02:42.055 [2024-12-09 06:01:36.492108] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:02:42.055 [2024-12-09 06:01:36.492117] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:02:42.055 [2024-12-09 06:01:36.492125] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:02:42.055 [2024-12-09 06:01:36.492132] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
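In the dirty variant the original target is killed with SIGKILL, so the lvstore is never cleanly unloaded; once a fresh nvmf_tgt is up, simply re-creating the AIO bdev on the same file makes blobstore recovery replay the metadata, after which the grown geometry should still be visible. Condensed from the steps that follow (again reusing the earlier shorthand; $nvmfpid stands for the PID of the original target process):

    kill -9 "$nvmfpid"                                  # target dies with the lvstore still dirty
    # start a fresh nvmf_tgt (nvmfappstart -m 0x1 in the script), then:
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096      # blobstore recovery runs on load
    # once the lvol bdev is back (waitforbdev), check that the grow survived:
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # 61
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99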
01:02:42.055 [2024-12-09 06:01:36.492373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:02:42.055 [2024-12-09 06:01:36.534605] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:02:42.623 06:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:42.623 06:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 01:02:42.623 06:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:02:42.623 06:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 01:02:42.623 06:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:02:42.882 06:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:02:42.882 06:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:02:42.882 [2024-12-09 06:01:37.412555] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 01:02:42.882 [2024-12-09 06:01:37.412995] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 01:02:42.882 [2024-12-09 06:01:37.413264] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 01:02:43.140 06:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 01:02:43.140 06:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 7101bdf0-3497-4479-bd9e-3225cc4f34bd 01:02:43.140 06:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=7101bdf0-3497-4479-bd9e-3225cc4f34bd 01:02:43.140 06:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:02:43.140 06:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 01:02:43.140 06:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:02:43.140 06:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:02:43.140 06:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 01:02:43.140 06:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7101bdf0-3497-4479-bd9e-3225cc4f34bd -t 2000 01:02:43.399 [ 01:02:43.399 { 01:02:43.399 "name": "7101bdf0-3497-4479-bd9e-3225cc4f34bd", 01:02:43.399 "aliases": [ 01:02:43.399 "lvs/lvol" 01:02:43.399 ], 01:02:43.399 "product_name": "Logical Volume", 01:02:43.399 "block_size": 4096, 01:02:43.399 "num_blocks": 38912, 01:02:43.399 "uuid": "7101bdf0-3497-4479-bd9e-3225cc4f34bd", 01:02:43.399 "assigned_rate_limits": { 01:02:43.399 "rw_ios_per_sec": 0, 01:02:43.399 "rw_mbytes_per_sec": 0, 01:02:43.399 "r_mbytes_per_sec": 0, 01:02:43.399 "w_mbytes_per_sec": 0 01:02:43.399 }, 01:02:43.399 
"claimed": false, 01:02:43.399 "zoned": false, 01:02:43.399 "supported_io_types": { 01:02:43.399 "read": true, 01:02:43.399 "write": true, 01:02:43.399 "unmap": true, 01:02:43.399 "flush": false, 01:02:43.399 "reset": true, 01:02:43.399 "nvme_admin": false, 01:02:43.399 "nvme_io": false, 01:02:43.399 "nvme_io_md": false, 01:02:43.399 "write_zeroes": true, 01:02:43.399 "zcopy": false, 01:02:43.399 "get_zone_info": false, 01:02:43.399 "zone_management": false, 01:02:43.399 "zone_append": false, 01:02:43.399 "compare": false, 01:02:43.399 "compare_and_write": false, 01:02:43.399 "abort": false, 01:02:43.399 "seek_hole": true, 01:02:43.399 "seek_data": true, 01:02:43.399 "copy": false, 01:02:43.399 "nvme_iov_md": false 01:02:43.399 }, 01:02:43.399 "driver_specific": { 01:02:43.399 "lvol": { 01:02:43.399 "lvol_store_uuid": "dbe84898-76ec-4485-b50f-90d2c8f7a353", 01:02:43.399 "base_bdev": "aio_bdev", 01:02:43.399 "thin_provision": false, 01:02:43.399 "num_allocated_clusters": 38, 01:02:43.399 "snapshot": false, 01:02:43.399 "clone": false, 01:02:43.399 "esnap_clone": false 01:02:43.399 } 01:02:43.399 } 01:02:43.399 } 01:02:43.399 ] 01:02:43.399 06:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 01:02:43.399 06:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbe84898-76ec-4485-b50f-90d2c8f7a353 01:02:43.399 06:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 01:02:43.658 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 01:02:43.658 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 01:02:43.658 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbe84898-76ec-4485-b50f-90d2c8f7a353 01:02:43.917 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 01:02:43.917 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:02:43.917 [2024-12-09 06:01:38.448593] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 01:02:43.917 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbe84898-76ec-4485-b50f-90d2c8f7a353 01:02:43.917 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 01:02:43.917 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbe84898-76ec-4485-b50f-90d2c8f7a353 01:02:43.917 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:02:43.917 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:02:43.917 06:01:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:02:43.917 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:02:43.917 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:02:43.917 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:02:43.917 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:02:43.917 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 01:02:43.917 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbe84898-76ec-4485-b50f-90d2c8f7a353 01:02:44.176 request: 01:02:44.176 { 01:02:44.176 "uuid": "dbe84898-76ec-4485-b50f-90d2c8f7a353", 01:02:44.176 "method": "bdev_lvol_get_lvstores", 01:02:44.176 "req_id": 1 01:02:44.176 } 01:02:44.176 Got JSON-RPC error response 01:02:44.176 response: 01:02:44.176 { 01:02:44.176 "code": -19, 01:02:44.176 "message": "No such device" 01:02:44.176 } 01:02:44.176 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 01:02:44.176 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:02:44.176 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:02:44.176 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:02:44.176 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:02:44.435 aio_bdev 01:02:44.435 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7101bdf0-3497-4479-bd9e-3225cc4f34bd 01:02:44.435 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=7101bdf0-3497-4479-bd9e-3225cc4f34bd 01:02:44.435 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:02:44.435 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 01:02:44.435 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:02:44.435 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:02:44.435 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 01:02:44.694 06:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7101bdf0-3497-4479-bd9e-3225cc4f34bd -t 2000 01:02:44.694 [ 01:02:44.694 { 
01:02:44.694 "name": "7101bdf0-3497-4479-bd9e-3225cc4f34bd", 01:02:44.694 "aliases": [ 01:02:44.694 "lvs/lvol" 01:02:44.694 ], 01:02:44.694 "product_name": "Logical Volume", 01:02:44.694 "block_size": 4096, 01:02:44.694 "num_blocks": 38912, 01:02:44.694 "uuid": "7101bdf0-3497-4479-bd9e-3225cc4f34bd", 01:02:44.694 "assigned_rate_limits": { 01:02:44.694 "rw_ios_per_sec": 0, 01:02:44.694 "rw_mbytes_per_sec": 0, 01:02:44.694 "r_mbytes_per_sec": 0, 01:02:44.694 "w_mbytes_per_sec": 0 01:02:44.694 }, 01:02:44.694 "claimed": false, 01:02:44.694 "zoned": false, 01:02:44.694 "supported_io_types": { 01:02:44.694 "read": true, 01:02:44.694 "write": true, 01:02:44.694 "unmap": true, 01:02:44.694 "flush": false, 01:02:44.694 "reset": true, 01:02:44.694 "nvme_admin": false, 01:02:44.694 "nvme_io": false, 01:02:44.694 "nvme_io_md": false, 01:02:44.694 "write_zeroes": true, 01:02:44.694 "zcopy": false, 01:02:44.694 "get_zone_info": false, 01:02:44.694 "zone_management": false, 01:02:44.694 "zone_append": false, 01:02:44.694 "compare": false, 01:02:44.694 "compare_and_write": false, 01:02:44.694 "abort": false, 01:02:44.694 "seek_hole": true, 01:02:44.694 "seek_data": true, 01:02:44.694 "copy": false, 01:02:44.694 "nvme_iov_md": false 01:02:44.694 }, 01:02:44.694 "driver_specific": { 01:02:44.694 "lvol": { 01:02:44.694 "lvol_store_uuid": "dbe84898-76ec-4485-b50f-90d2c8f7a353", 01:02:44.694 "base_bdev": "aio_bdev", 01:02:44.694 "thin_provision": false, 01:02:44.694 "num_allocated_clusters": 38, 01:02:44.694 "snapshot": false, 01:02:44.694 "clone": false, 01:02:44.694 "esnap_clone": false 01:02:44.694 } 01:02:44.694 } 01:02:44.694 } 01:02:44.694 ] 01:02:44.694 06:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 01:02:44.694 06:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbe84898-76ec-4485-b50f-90d2c8f7a353 01:02:44.952 06:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 01:02:44.952 06:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 01:02:44.952 06:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbe84898-76ec-4485-b50f-90d2c8f7a353 01:02:44.952 06:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 01:02:45.211 06:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 01:02:45.211 06:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7101bdf0-3497-4479-bd9e-3225cc4f34bd 01:02:45.469 06:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dbe84898-76ec-4485-b50f-90d2c8f7a353 01:02:45.728 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:02:45.728 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:02:46.295 ************************************ 01:02:46.295 END TEST lvs_grow_dirty 01:02:46.295 ************************************ 01:02:46.295 01:02:46.295 real 0m19.079s 01:02:46.295 user 0m37.617s 01:02:46.295 sys 0m7.868s 01:02:46.295 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 01:02:46.295 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:02:46.295 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 01:02:46.295 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 01:02:46.295 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 01:02:46.295 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 01:02:46.295 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 01:02:46.295 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 01:02:46.295 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 01:02:46.295 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 01:02:46.295 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 01:02:46.295 nvmf_trace.0 01:02:46.553 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 01:02:46.553 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 01:02:46.553 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 01:02:46.553 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 01:02:46.811 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:02:46.811 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 01:02:46.811 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 01:02:46.811 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:02:46.811 rmmod nvme_tcp 01:02:46.811 rmmod nvme_fabrics 01:02:46.811 rmmod nvme_keyring 01:02:46.811 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:02:46.811 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 01:02:46.811 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 01:02:46.811 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63528 ']' 01:02:46.811 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63528 01:02:46.811 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 63528 ']' 01:02:46.811 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 63528 01:02:46.811 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 01:02:46.811 06:01:41 
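The NOT block and the tail of the test above amount to a negative check followed by teardown: once the backing AIO bdev is deleted the lvstore query must fail with -19, and after the store has been re-created and re-verified one more time everything is removed bottom-up. A sketch with the names from this run (NOT is an autotest_common.sh helper that succeeds only when its command fails, approximated here with !):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  lvol=7101bdf0-3497-4479-bd9e-3225cc4f34bd
  lvs=dbe84898-76ec-4485-b50f-90d2c8f7a353

  # Hot-removing the AIO bdev takes the lvstore down with it ...
  $rpc bdev_aio_delete aio_bdev
  # ... so the query is now expected to fail with -19 "No such device"
  ! $rpc bdev_lvol_get_lvstores -u "$lvs"

  # Final cleanup: lvol, then lvstore, then the AIO bdev and its backing file
  $rpc bdev_lvol_delete "$lvol"
  $rpc bdev_lvol_delete_lvstore -u "$lvs"
  $rpc bdev_aio_delete aio_bdev
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev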
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:02:46.811 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63528 01:02:47.070 killing process with pid 63528 01:02:47.070 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:02:47.070 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:02:47.070 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63528' 01:02:47.070 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 63528 01:02:47.070 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 63528 01:02:47.070 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:02:47.070 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:02:47.070 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:02:47.070 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 01:02:47.070 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 01:02:47.070 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:02:47.070 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 01:02:47.070 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:02:47.070 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:02:47.070 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:02:47.070 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:02:47.070 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:02:47.070 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:02:47.070 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:02:47.070 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:02:47.334 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:02:47.334 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:02:47.334 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:02:47.334 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:02:47.334 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:02:47.334 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:02:47.334 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:02:47.334 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 01:02:47.334 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:02:47.334 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:02:47.334 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:02:47.334 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 01:02:47.334 ************************************ 01:02:47.334 END TEST nvmf_lvs_grow 01:02:47.334 ************************************ 01:02:47.334 01:02:47.334 real 0m39.214s 01:02:47.334 user 0m58.363s 01:02:47.334 sys 0m12.367s 01:02:47.334 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 01:02:47.334 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:02:47.593 06:01:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 01:02:47.593 06:01:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:02:47.593 06:01:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:02:47.593 06:01:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:02:47.593 ************************************ 01:02:47.593 START TEST nvmf_bdev_io_wait 01:02:47.593 ************************************ 01:02:47.593 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 01:02:47.593 * Looking for test storage... 
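The nvmf_lvs_grow teardown traced just above does three things worth noting: it archives the tracepoint shared-memory file for offline analysis, stops the target by pid, and strips only the firewall rules the test installed (they all carry an SPDK_NVMF comment). Condensed, with the pid and output path captured in this run:

  out=/home/vagrant/spdk_repo/spdk/../output
  nvmfpid=63528

  # Archive the per-shm-id trace buffer (nvmf_trace.0) that the target left in /dev/shm;
  # the app log earlier suggested copying this file for offline analysis/debug
  for n in $(find /dev/shm -name '*.0' -printf '%f\n'); do
      tar -C /dev/shm/ -cvzf "$out/${n}_shm.tar.gz" "$n"
  done

  # Stop the nvmf target and wait for it to exit
  kill "$nvmfpid"
  wait "$nvmfpid"

  # Drop only the iptables rules tagged with the SPDK_NVMF comment, keeping the rest intact
  iptables-save | grep -v SPDK_NVMF | iptables-restore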
01:02:47.593 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:02:47.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:47.593 --rc genhtml_branch_coverage=1 01:02:47.593 --rc genhtml_function_coverage=1 01:02:47.593 --rc genhtml_legend=1 01:02:47.593 --rc geninfo_all_blocks=1 01:02:47.593 --rc geninfo_unexecuted_blocks=1 01:02:47.593 01:02:47.593 ' 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:02:47.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:47.593 --rc genhtml_branch_coverage=1 01:02:47.593 --rc genhtml_function_coverage=1 01:02:47.593 --rc genhtml_legend=1 01:02:47.593 --rc geninfo_all_blocks=1 01:02:47.593 --rc geninfo_unexecuted_blocks=1 01:02:47.593 01:02:47.593 ' 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:02:47.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:47.593 --rc genhtml_branch_coverage=1 01:02:47.593 --rc genhtml_function_coverage=1 01:02:47.593 --rc genhtml_legend=1 01:02:47.593 --rc geninfo_all_blocks=1 01:02:47.593 --rc geninfo_unexecuted_blocks=1 01:02:47.593 01:02:47.593 ' 01:02:47.593 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:02:47.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:47.593 --rc genhtml_branch_coverage=1 01:02:47.594 --rc genhtml_function_coverage=1 01:02:47.594 --rc genhtml_legend=1 01:02:47.594 --rc geninfo_all_blocks=1 01:02:47.594 --rc geninfo_unexecuted_blocks=1 01:02:47.594 01:02:47.594 ' 01:02:47.594 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:02:47.594 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
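The scripts/common.sh trace above is just a dotted-version comparison deciding whether the installed lcov is older than 2.x (and therefore which legacy --rc coverage flags to export). Stripped of the xtrace noise, the comparison is roughly the following; this is a condensed sketch, not the verbatim cmp_versions helper, and it assumes purely numeric version components:

  # Return success if version $1 is strictly lower than version $2
  version_lt() {
      local -a ver1 ver2
      local i len
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( i = 0; i < len; i++ )); do
          (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
          (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
      done
      return 1    # equal versions are not strictly lower
  }

  version_lt 1.15 2 && echo "lcov is pre-2.x, keep the legacy coverage flags"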
-- nvmf/common.sh@7 -- # uname -s 01:02:47.594 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:02:47.594 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:02:47.594 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:02:47.594 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:02:47.594 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:02:47.594 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:02:47.594 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:02:47.594 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:02:47.594 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:02:47.594 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:02:47.853 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:02:47.853 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:02:47.853 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:02:47.853 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:02:47.853 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:02:47.853 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:02:47.853 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:02:47.853 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 01:02:47.853 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:02:47.853 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:02:47.853 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:02:47.853 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:02:47.854 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
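Among the variables set up above, the initiator identity is generated on the fly: nvme gen-hostnqn produces a fresh NQN and the host ID reused on every connect is just its UUID suffix. One way to express that derivation (the exact wording inside nvmf/common.sh may differ; the values shown are the ones captured in this run):

  # Generates e.g. nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8
  NVME_HOSTNQN=$(nvme gen-hostnqn)

  # The host ID is the UUID portion after the last colon
  NVME_HOSTID=${NVME_HOSTNQN##*:}

  # Every later "nvme connect" gets both flags
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")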
01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:02:47.854 
06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:02:47.854 Cannot find device "nvmf_init_br" 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:02:47.854 Cannot find device "nvmf_init_br2" 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:02:47.854 Cannot find device "nvmf_tgt_br" 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:02:47.854 Cannot find device "nvmf_tgt_br2" 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:02:47.854 Cannot find device "nvmf_init_br" 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:02:47.854 Cannot find device "nvmf_init_br2" 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:02:47.854 Cannot find device "nvmf_tgt_br" 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:02:47.854 Cannot find device "nvmf_tgt_br2" 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:02:47.854 Cannot find device "nvmf_br" 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:02:47.854 Cannot find device "nvmf_init_if" 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:02:47.854 Cannot find device "nvmf_init_if2" 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:02:47.854 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 01:02:47.854 
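Every "Cannot find device" and "Cannot open network namespace" message above is expected: before building its topology, nvmf_veth_init tears down any leftovers from a previous run, and each command is allowed to fail (the trailing "true" entries in the trace suggest an "|| true" style fallback). Shown for a few of the interfaces:

  # Best-effort cleanup of a previous run; missing devices are not an error
  ip link set nvmf_init_br nomaster || true
  ip link set nvmf_tgt_br nomaster || true
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true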
06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:02:47.854 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 01:02:47.854 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:02:48.113 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:02:48.113 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:02:48.113 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:02:48.113 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:02:48.113 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:02:48.113 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:02:48.113 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:02:48.113 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:02:48.113 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:02:48.113 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:02:48.113 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:02:48.113 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:02:48.113 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:02:48.114 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:02:48.114 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:02:48.114 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:02:48.114 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:02:48.114 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:02:48.114 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:02:48.114 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:02:48.114 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:02:48.114 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:02:48.114 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:02:48.114 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:02:48.114 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:02:48.114 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:02:48.114 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:02:48.114 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:02:48.114 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:02:48.114 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:02:48.114 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:02:48.114 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:02:48.114 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:02:48.114 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 01:02:48.114 01:02:48.114 --- 10.0.0.3 ping statistics --- 01:02:48.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:48.114 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 01:02:48.114 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:02:48.114 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:02:48.114 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.095 ms 01:02:48.114 01:02:48.114 --- 10.0.0.4 ping statistics --- 01:02:48.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:48.114 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 01:02:48.114 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:02:48.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:02:48.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 01:02:48.372 01:02:48.372 --- 10.0.0.1 ping statistics --- 01:02:48.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:48.372 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 01:02:48.372 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:02:48.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:02:48.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 01:02:48.372 01:02:48.372 --- 10.0.0.2 ping statistics --- 01:02:48.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:48.372 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 01:02:48.372 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:02:48.372 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 01:02:48.372 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:02:48.372 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:02:48.372 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:02:48.372 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:02:48.372 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:02:48.372 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:02:48.373 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:02:48.373 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 01:02:48.373 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:02:48.373 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 01:02:48.373 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:02:48.373 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 01:02:48.373 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=63891 01:02:48.373 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 63891 01:02:48.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:02:48.373 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 63891 ']' 01:02:48.373 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:48.373 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:48.373 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:02:48.373 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:48.373 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:02:48.373 [2024-12-09 06:01:42.810066] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
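Condensing nvmf_veth_init and nvmfappstart as traced above: the target runs inside a private network namespace, reachable from the host over veth pairs enslaved to one bridge, and is started with --wait-for-rpc so the test can tune subsystems before framework init. A minimal sketch for the first address pair only (the full helper also wires up nvmf_init_if2/nvmf_tgt_if2 for 10.0.0.2/10.0.0.4 the same way):

  # Initiator side stays in the default namespace, target side lives in nvmf_tgt_ns_spdk
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # Addresses: 10.0.0.1 on the initiator side, 10.0.0.3 on the target side
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  # Bring everything up and bridge the two veth peers together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Allow NVMe/TCP traffic in (tagged so teardown can find it) and verify reachability
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.3

  # Start the nvmf target inside the namespace, paused until framework_start_init;
  # waitforlisten then polls /var/tmp/spdk.sock until the RPC server answers
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!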
01:02:48.373 [2024-12-09 06:01:42.810304] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:02:48.631 [2024-12-09 06:01:42.963655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:02:48.631 [2024-12-09 06:01:43.004534] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:02:48.631 [2024-12-09 06:01:43.004769] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:02:48.631 [2024-12-09 06:01:43.004785] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:02:48.631 [2024-12-09 06:01:43.004793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:02:48.631 [2024-12-09 06:01:43.004800] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:02:48.631 [2024-12-09 06:01:43.005704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:02:48.631 [2024-12-09 06:01:43.005900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:02:48.631 [2024-12-09 06:01:43.006015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:02:48.631 [2024-12-09 06:01:43.006524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:02:49.197 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:49.197 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 01:02:49.197 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:02:49.197 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 01:02:49.197 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:02:49.197 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:02:49.197 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 01:02:49.197 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:49.197 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:02:49.197 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:49.197 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 01:02:49.197 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:49.197 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:02:49.197 [2024-12-09 06:01:43.778831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:02:49.456 [2024-12-09 06:01:43.794043] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:02:49.456 Malloc0 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:02:49.456 [2024-12-09 06:01:43.856633] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=63926 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=63928 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:02:49.456 06:01:43 
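With the target now accepting RPCs, bdev_io_wait.sh builds its export path exactly as traced: bdev options are set while the framework is still paused, then a TCP transport, a malloc namespace and a listener are wired up. The same sequence as a stand-alone sketch (rpc_cmd in the test is a thin wrapper around rpc.py aimed at the in-namespace target):

  rpc="ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/scripts/rpc.py"

  # Target was started with --wait-for-rpc: shrink the bdev_io pool (-p 5 -c 1) so that
  # I/O submissions will have to wait for buffers, which is the condition this test
  # exercises, then let framework init proceed
  $rpc bdev_set_options -p 5 -c 1
  $rpc framework_start_init

  # TCP transport, flags exactly as traced above
  $rpc nvmf_create_transport -t tcp -o -u 8192

  # 64 MiB malloc bdev with 512-byte blocks becomes the namespace of cnode1
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420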
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:02:49.456 { 01:02:49.456 "params": { 01:02:49.456 "name": "Nvme$subsystem", 01:02:49.456 "trtype": "$TEST_TRANSPORT", 01:02:49.456 "traddr": "$NVMF_FIRST_TARGET_IP", 01:02:49.456 "adrfam": "ipv4", 01:02:49.456 "trsvcid": "$NVMF_PORT", 01:02:49.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:02:49.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:02:49.456 "hdgst": ${hdgst:-false}, 01:02:49.456 "ddgst": ${ddgst:-false} 01:02:49.456 }, 01:02:49.456 "method": "bdev_nvme_attach_controller" 01:02:49.456 } 01:02:49.456 EOF 01:02:49.456 )") 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=63930 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:02:49.456 { 01:02:49.456 "params": { 01:02:49.456 "name": "Nvme$subsystem", 01:02:49.456 "trtype": "$TEST_TRANSPORT", 01:02:49.456 "traddr": "$NVMF_FIRST_TARGET_IP", 01:02:49.456 "adrfam": "ipv4", 01:02:49.456 "trsvcid": "$NVMF_PORT", 01:02:49.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:02:49.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:02:49.456 "hdgst": ${hdgst:-false}, 01:02:49.456 "ddgst": ${ddgst:-false} 01:02:49.456 }, 01:02:49.456 "method": "bdev_nvme_attach_controller" 01:02:49.456 } 01:02:49.456 EOF 01:02:49.456 )") 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=63933 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
01:02:49.456 { 01:02:49.456 "params": { 01:02:49.456 "name": "Nvme$subsystem", 01:02:49.456 "trtype": "$TEST_TRANSPORT", 01:02:49.456 "traddr": "$NVMF_FIRST_TARGET_IP", 01:02:49.456 "adrfam": "ipv4", 01:02:49.456 "trsvcid": "$NVMF_PORT", 01:02:49.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:02:49.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:02:49.456 "hdgst": ${hdgst:-false}, 01:02:49.456 "ddgst": ${ddgst:-false} 01:02:49.456 }, 01:02:49.456 "method": "bdev_nvme_attach_controller" 01:02:49.456 } 01:02:49.456 EOF 01:02:49.456 )") 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 01:02:49.456 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 01:02:49.457 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:02:49.457 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:02:49.457 { 01:02:49.457 "params": { 01:02:49.457 "name": "Nvme$subsystem", 01:02:49.457 "trtype": "$TEST_TRANSPORT", 01:02:49.457 "traddr": "$NVMF_FIRST_TARGET_IP", 01:02:49.457 "adrfam": "ipv4", 01:02:49.457 "trsvcid": "$NVMF_PORT", 01:02:49.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:02:49.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:02:49.457 "hdgst": ${hdgst:-false}, 01:02:49.457 "ddgst": ${ddgst:-false} 01:02:49.457 }, 01:02:49.457 "method": "bdev_nvme_attach_controller" 01:02:49.457 } 01:02:49.457 EOF 01:02:49.457 )") 01:02:49.457 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 01:02:49.457 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 01:02:49.457 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 01:02:49.457 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 01:02:49.457 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:02:49.457 "params": { 01:02:49.457 "name": "Nvme1", 01:02:49.457 "trtype": "tcp", 01:02:49.457 "traddr": "10.0.0.3", 01:02:49.457 "adrfam": "ipv4", 01:02:49.457 "trsvcid": "4420", 01:02:49.457 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:02:49.457 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:02:49.457 "hdgst": false, 01:02:49.457 "ddgst": false 01:02:49.457 }, 01:02:49.457 "method": "bdev_nvme_attach_controller" 01:02:49.457 }' 01:02:49.457 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 01:02:49.457 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:02:49.457 "params": { 01:02:49.457 "name": "Nvme1", 01:02:49.457 "trtype": "tcp", 01:02:49.457 "traddr": "10.0.0.3", 01:02:49.457 "adrfam": "ipv4", 01:02:49.457 "trsvcid": "4420", 01:02:49.457 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:02:49.457 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:02:49.457 "hdgst": false, 01:02:49.457 "ddgst": false 01:02:49.457 }, 01:02:49.457 "method": "bdev_nvme_attach_controller" 01:02:49.457 }' 01:02:49.457 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
01:02:49.457 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 01:02:49.457 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 01:02:49.457 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:02:49.457 "params": { 01:02:49.457 "name": "Nvme1", 01:02:49.457 "trtype": "tcp", 01:02:49.457 "traddr": "10.0.0.3", 01:02:49.457 "adrfam": "ipv4", 01:02:49.457 "trsvcid": "4420", 01:02:49.457 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:02:49.457 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:02:49.457 "hdgst": false, 01:02:49.457 "ddgst": false 01:02:49.457 }, 01:02:49.457 "method": "bdev_nvme_attach_controller" 01:02:49.457 }' 01:02:49.457 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 01:02:49.457 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:02:49.457 "params": { 01:02:49.457 "name": "Nvme1", 01:02:49.457 "trtype": "tcp", 01:02:49.457 "traddr": "10.0.0.3", 01:02:49.457 "adrfam": "ipv4", 01:02:49.457 "trsvcid": "4420", 01:02:49.457 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:02:49.457 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:02:49.457 "hdgst": false, 01:02:49.457 "ddgst": false 01:02:49.457 }, 01:02:49.457 "method": "bdev_nvme_attach_controller" 01:02:49.457 }' 01:02:49.457 [2024-12-09 06:01:43.915311] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:02:49.457 [2024-12-09 06:01:43.915917] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 01:02:49.457 [2024-12-09 06:01:43.922288] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:02:49.457 [2024-12-09 06:01:43.922461] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 01:02:49.457 [2024-12-09 06:01:43.931411] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:02:49.457 [2024-12-09 06:01:43.931592] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 01:02:49.457 [2024-12-09 06:01:43.936084] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:02:49.457 [2024-12-09 06:01:43.936152] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 01:02:49.457 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 63926 01:02:49.715 [2024-12-09 06:01:44.126466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:49.715 [2024-12-09 06:01:44.169205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:02:49.715 [2024-12-09 06:01:44.181037] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:02:49.715 [2024-12-09 06:01:44.209984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:49.715 [2024-12-09 06:01:44.253009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 01:02:49.715 [2024-12-09 06:01:44.264769] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:02:49.715 [2024-12-09 06:01:44.282987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:49.973 [2024-12-09 06:01:44.324744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 01:02:49.973 [2024-12-09 06:01:44.336592] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:02:49.973 Running I/O for 1 seconds... 01:02:49.973 [2024-12-09 06:01:44.382749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:49.973 Running I/O for 1 seconds... 01:02:49.973 [2024-12-09 06:01:44.432761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 01:02:49.973 [2024-12-09 06:01:44.444684] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:02:49.973 Running I/O for 1 seconds... 01:02:50.232 Running I/O for 1 seconds... 
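At this point four bdevperf instances are running concurrently against the same cnode1 namespace: write on core mask 0x10, read on 0x20, flush on 0x40 and unmap on 0x80, each with -q 128 -o 4096 -t 1 -s 256, and bdev_io_wait.sh then waits on their PIDs (63926, 63928, 63930, 63933). A hedged sketch of that launch-and-wait pattern, reusing the gen_target_json sketch above; the loop itself is illustrative, not copied from the script:

# Launch the four one-second workloads in parallel, then reap them.
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
pids=()
for spec in "0x10 write" "0x20 read" "0x40 flush" "0x80 unmap"; do
  read -r mask workload <<<"$spec"
  "$BDEVPERF" -m "$mask" --json <(gen_target_json) \
      -q 128 -o 4096 -w "$workload" -t 1 -s 256 &
  pids+=($!)
done
wait "${pids[@]}"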
01:02:51.057 8716.00 IOPS, 34.05 MiB/s [2024-12-09T06:01:45.644Z] 217968.00 IOPS, 851.44 MiB/s
01:02:51.057 Latency(us)
01:02:51.057 [2024-12-09T06:01:45.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:02:51.057 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
01:02:51.057 Nvme1n1 : 1.01 8761.29 34.22 0.00 0.00 14537.76 7527.43 18529.05
01:02:51.057 [2024-12-09T06:01:45.644Z] ===================================================================================================================
01:02:51.057 [2024-12-09T06:01:45.644Z] Total : 8761.29 34.22 0.00 0.00 14537.76 7527.43 18529.05
01:02:51.057
01:02:51.057 Latency(us)
01:02:51.057 [2024-12-09T06:01:45.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:02:51.057 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
01:02:51.057 Nvme1n1 : 1.00 217603.68 850.01 0.00 0.00 585.78 286.23 1644.98
01:02:51.057 [2024-12-09T06:01:45.644Z] ===================================================================================================================
01:02:51.057 [2024-12-09T06:01:45.644Z] Total : 217603.68 850.01 0.00 0.00 585.78 286.23 1644.98
01:02:51.057 6238.00 IOPS, 24.37 MiB/s
01:02:51.057 Latency(us)
01:02:51.057 [2024-12-09T06:01:45.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:02:51.057 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
01:02:51.057 Nvme1n1 : 1.01 6292.01 24.58 0.00 0.00 20214.88 10948.99 29056.93
01:02:51.057 [2024-12-09T06:01:45.644Z] ===================================================================================================================
01:02:51.057 [2024-12-09T06:01:45.644Z] Total : 6292.01 24.58 0.00 0.00 20214.88 10948.99 29056.93
01:02:51.057 5386.00 IOPS, 21.04 MiB/s [2024-12-09T06:01:45.644Z] 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 63928
01:02:51.057 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 63930
01:02:51.057 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 63933
01:02:51.057
01:02:51.057 Latency(us)
01:02:51.057 [2024-12-09T06:01:45.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:02:51.057 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
01:02:51.057 Nvme1n1 : 1.01 5473.59 21.38 0.00 0.00 23269.74 8685.49 42111.49
01:02:51.057 [2024-12-09T06:01:45.644Z] ===================================================================================================================
01:02:51.057 [2024-12-09T06:01:45.644Z] Total : 5473.59 21.38 0.00 0.00 23269.74 8685.49 42111.49
01:02:51.316 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
01:02:51.316 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
01:02:51.316 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
01:02:51.316 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:02:51.316 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
01:02:51.316 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
01:02:51.316 06:01:45 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 01:02:51.316 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 01:02:51.316 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:02:51.316 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 01:02:51.316 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 01:02:51.316 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:02:51.316 rmmod nvme_tcp 01:02:51.316 rmmod nvme_fabrics 01:02:51.575 rmmod nvme_keyring 01:02:51.575 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:02:51.575 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 01:02:51.575 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 01:02:51.575 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 63891 ']' 01:02:51.575 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 63891 01:02:51.575 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 63891 ']' 01:02:51.575 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 63891 01:02:51.575 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 01:02:51.575 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:02:51.575 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63891 01:02:51.575 killing process with pid 63891 01:02:51.575 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:02:51.575 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:02:51.575 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63891' 01:02:51.575 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 63891 01:02:51.575 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 63891 01:02:51.575 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:02:51.575 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:02:51.575 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:02:51.575 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 01:02:51.575 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 01:02:51.575 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:02:51.575 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 01:02:51.575 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:02:51.575 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 
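nvmftestfini then unwinds the whole fixture: the nvme-tcp and nvme-fabrics modules are removed, the nvmf_tgt process (pid 63891 in this run) is killed, the SPDK-tagged iptables rules are filtered back out, and nvmf_veth_fini dismantles the veth/bridge topology line by line below. Condensed into a hedged sketch (the final netns delete is an assumption about what remove_spdk_ns does; the rest mirrors the traced commands):

# Approximate teardown sequence for the virtual test network.
modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                     # target pid, 63891 here
iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep only non-SPDK rules
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" nomaster && ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if && ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk                       # assumed remove_spdk_ns step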
01:02:51.575 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:02:51.834 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:02:51.834 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:02:51.834 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:02:51.834 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:02:51.834 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:02:51.834 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:02:51.834 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:02:51.834 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:02:51.834 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:02:51.834 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:02:51.834 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:02:51.834 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:02:51.834 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 01:02:51.834 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:02:51.834 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:02:51.834 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:02:52.093 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 01:02:52.093 01:02:52.093 real 0m4.531s 01:02:52.093 user 0m17.125s 01:02:52.093 sys 0m2.561s 01:02:52.093 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 01:02:52.093 ************************************ 01:02:52.093 END TEST nvmf_bdev_io_wait 01:02:52.093 ************************************ 01:02:52.093 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:02:52.093 06:01:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 01:02:52.093 06:01:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:02:52.093 06:01:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:02:52.093 06:01:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:02:52.093 ************************************ 01:02:52.093 START TEST nvmf_queue_depth 01:02:52.093 ************************************ 01:02:52.093 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 
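The next test, queue_depth.sh, rebuilds the same veth/namespace topology, starts nvmf_tgt on core mask 0x2, exports a 64 MiB, 512-byte-block Malloc bdev as nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.3:4420, and then runs a 10-second verify workload at queue depth 1024 through bdevperf. The rpc_cmd calls traced further down amount to roughly these explicit scripts/rpc.py invocations (rpc_cmd is the test suite's wrapper around rpc.py; the standalone form is shown here for clarity):

# Target-side setup performed later in this trace, spelled out as rpc.py calls.
cd /home/vagrant/spdk_repo/spdk
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420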
01:02:52.093 * Looking for test storage... 01:02:52.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 01:02:52.353 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:02:52.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:52.354 --rc genhtml_branch_coverage=1 01:02:52.354 --rc genhtml_function_coverage=1 01:02:52.354 --rc genhtml_legend=1 01:02:52.354 --rc geninfo_all_blocks=1 01:02:52.354 --rc geninfo_unexecuted_blocks=1 01:02:52.354 01:02:52.354 ' 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:02:52.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:52.354 --rc genhtml_branch_coverage=1 01:02:52.354 --rc genhtml_function_coverage=1 01:02:52.354 --rc genhtml_legend=1 01:02:52.354 --rc geninfo_all_blocks=1 01:02:52.354 --rc geninfo_unexecuted_blocks=1 01:02:52.354 01:02:52.354 ' 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:02:52.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:52.354 --rc genhtml_branch_coverage=1 01:02:52.354 --rc genhtml_function_coverage=1 01:02:52.354 --rc genhtml_legend=1 01:02:52.354 --rc geninfo_all_blocks=1 01:02:52.354 --rc geninfo_unexecuted_blocks=1 01:02:52.354 01:02:52.354 ' 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:02:52.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:02:52.354 --rc genhtml_branch_coverage=1 01:02:52.354 --rc genhtml_function_coverage=1 01:02:52.354 --rc genhtml_legend=1 01:02:52.354 --rc geninfo_all_blocks=1 01:02:52.354 --rc geninfo_unexecuted_blocks=1 01:02:52.354 01:02:52.354 ' 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:02:52.354 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 01:02:52.354 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 01:02:52.355 
06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:02:52.355 06:01:46 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:02:52.355 Cannot find device "nvmf_init_br" 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:02:52.355 Cannot find device "nvmf_init_br2" 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:02:52.355 Cannot find device "nvmf_tgt_br" 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:02:52.355 Cannot find device "nvmf_tgt_br2" 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:02:52.355 Cannot find device "nvmf_init_br" 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 01:02:52.355 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:02:52.614 Cannot find device "nvmf_init_br2" 01:02:52.614 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 01:02:52.614 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:02:52.614 Cannot find device "nvmf_tgt_br" 01:02:52.614 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 01:02:52.614 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:02:52.614 Cannot find device "nvmf_tgt_br2" 01:02:52.614 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 01:02:52.614 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:02:52.614 Cannot find device "nvmf_br" 01:02:52.614 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 01:02:52.614 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:02:52.614 Cannot find device "nvmf_init_if" 01:02:52.614 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 01:02:52.614 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:02:52.614 Cannot find device "nvmf_init_if2" 01:02:52.614 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 01:02:52.614 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:02:52.614 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:02:52.614 06:01:47 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 01:02:52.614 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:02:52.614 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:02:52.614 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 01:02:52.614 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:02:52.614 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:02:52.614 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:02:52.614 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:02:52.614 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:02:52.614 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:02:52.614 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:02:52.614 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:02:52.614 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:02:52.614 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:02:52.614 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:02:52.614 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:02:52.614 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:02:52.614 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:02:52.614 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:02:52.614 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:02:52.614 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:02:52.614 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:02:52.614 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:02:52.614 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:02:52.874 
06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:02:52.874 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:02:52.874 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 01:02:52.874 01:02:52.874 --- 10.0.0.3 ping statistics --- 01:02:52.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:52.874 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:02:52.874 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:02:52.874 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 01:02:52.874 01:02:52.874 --- 10.0.0.4 ping statistics --- 01:02:52.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:52.874 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:02:52.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:02:52.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 01:02:52.874 01:02:52.874 --- 10.0.0.1 ping statistics --- 01:02:52.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:52.874 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:02:52.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:02:52.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 01:02:52.874 01:02:52.874 --- 10.0.0.2 ping statistics --- 01:02:52.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:02:52.874 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64221 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64221 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64221 ']' 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:52.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:02:52.874 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:02:52.874 [2024-12-09 06:01:47.421893] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:02:52.874 [2024-12-09 06:01:47.422313] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:02:53.132 [2024-12-09 06:01:47.575926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:53.132 [2024-12-09 06:01:47.632649] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:02:53.132 [2024-12-09 06:01:47.632691] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:02:53.132 [2024-12-09 06:01:47.632701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:02:53.132 [2024-12-09 06:01:47.632709] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:02:53.132 [2024-12-09 06:01:47.632715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:02:53.132 [2024-12-09 06:01:47.633075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:02:53.132 [2024-12-09 06:01:47.710330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:02:53.699 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:53.699 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 01:02:53.699 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:02:53.699 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 01:02:53.699 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:02:53.959 [2024-12-09 06:01:48.345549] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:02:53.959 Malloc0 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:02:53.959 [2024-12-09 06:01:48.415973] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64253 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64253 /var/tmp/bdevperf.sock 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64253 ']' 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:53.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:53.959 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:02:53.959 [2024-12-09 06:01:48.472730] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:02:53.959 [2024-12-09 06:01:48.472794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64253 ] 01:02:54.219 [2024-12-09 06:01:48.623671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:54.219 [2024-12-09 06:01:48.662856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:02:54.219 [2024-12-09 06:01:48.703278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:02:54.787 06:01:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:54.787 06:01:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 01:02:54.787 06:01:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:02:54.787 06:01:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:54.787 06:01:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:02:55.046 NVMe0n1 01:02:55.046 06:01:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:55.046 06:01:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:02:55.046 Running I/O for 10 seconds... 01:02:56.920 9471.00 IOPS, 37.00 MiB/s [2024-12-09T06:01:52.884Z] 10058.50 IOPS, 39.29 MiB/s [2024-12-09T06:01:53.818Z] 10270.67 IOPS, 40.12 MiB/s [2024-12-09T06:01:54.752Z] 10534.75 IOPS, 41.15 MiB/s [2024-12-09T06:01:55.686Z] 10713.20 IOPS, 41.85 MiB/s [2024-12-09T06:01:56.620Z] 10803.83 IOPS, 42.20 MiB/s [2024-12-09T06:01:57.557Z] 10924.14 IOPS, 42.67 MiB/s [2024-12-09T06:01:58.495Z] 11020.62 IOPS, 43.05 MiB/s [2024-12-09T06:01:59.912Z] 11061.44 IOPS, 43.21 MiB/s [2024-12-09T06:01:59.912Z] 11139.20 IOPS, 43.51 MiB/s 01:03:05.325 Latency(us) 01:03:05.325 [2024-12-09T06:01:59.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:03:05.325 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 01:03:05.325 Verification LBA range: start 0x0 length 0x4000 01:03:05.325 NVMe0n1 : 10.07 11167.99 43.62 0.00 0.00 91332.39 18107.94 66957.26 01:03:05.325 [2024-12-09T06:01:59.912Z] =================================================================================================================== 01:03:05.325 [2024-12-09T06:01:59.912Z] Total : 11167.99 43.62 0.00 0.00 91332.39 18107.94 66957.26 01:03:05.325 { 01:03:05.325 "results": [ 01:03:05.325 { 01:03:05.325 "job": "NVMe0n1", 01:03:05.325 "core_mask": "0x1", 01:03:05.325 "workload": "verify", 01:03:05.325 "status": "finished", 01:03:05.325 "verify_range": { 01:03:05.325 "start": 0, 01:03:05.325 "length": 16384 01:03:05.325 }, 01:03:05.325 "queue_depth": 1024, 01:03:05.325 "io_size": 4096, 01:03:05.325 "runtime": 10.065908, 01:03:05.325 "iops": 11167.993985242067, 01:03:05.325 "mibps": 43.62497650485182, 01:03:05.325 "io_failed": 0, 01:03:05.325 "io_timeout": 0, 01:03:05.325 "avg_latency_us": 91332.39107617488, 01:03:05.325 "min_latency_us": 18107.938955823294, 01:03:05.325 "max_latency_us": 66957.26265060241 
01:03:05.325 } 01:03:05.325 ], 01:03:05.325 "core_count": 1 01:03:05.325 } 01:03:05.325 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64253 01:03:05.325 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64253 ']' 01:03:05.325 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64253 01:03:05.325 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 01:03:05.325 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:05.325 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64253 01:03:05.325 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:03:05.325 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:03:05.325 killing process with pid 64253 01:03:05.325 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64253' 01:03:05.325 Received shutdown signal, test time was about 10.000000 seconds 01:03:05.325 01:03:05.325 Latency(us) 01:03:05.325 [2024-12-09T06:01:59.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:03:05.325 [2024-12-09T06:01:59.912Z] =================================================================================================================== 01:03:05.325 [2024-12-09T06:01:59.912Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:03:05.325 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64253 01:03:05.325 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64253 01:03:05.325 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 01:03:05.325 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 01:03:05.325 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 01:03:05.325 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 01:03:05.325 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:03:05.326 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 01:03:05.326 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 01:03:05.326 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:03:05.326 rmmod nvme_tcp 01:03:05.326 rmmod nvme_fabrics 01:03:05.326 rmmod nvme_keyring 01:03:05.602 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:03:05.602 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 01:03:05.602 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 01:03:05.602 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64221 ']' 01:03:05.602 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64221 01:03:05.602 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64221 ']' 
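The verify run settles at about 11.2k IOPS (43.6 MiB/s) over the 10.07-second runtime at queue depth 1024, after which the bdevperf process (pid 64253) and the target (pid 64221) are torn down. The host side of that measurement, reduced to a hedged sketch (the flags match the traced commands; the sleep stands in for the script's wait-for-socket step):

# Start bdevperf in wait-for-RPC mode, attach the TCP controller, run the job.
cd /home/vagrant/spdk_repo/spdk
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
sleep 1   # illustrative only; queue_depth.sh waits for the RPC socket instead
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests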
01:03:05.602 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64221 01:03:05.602 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 01:03:05.602 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:05.602 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64221 01:03:05.602 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:03:05.602 killing process with pid 64221 01:03:05.602 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:03:05.602 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64221' 01:03:05.602 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64221 01:03:05.602 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64221 01:03:05.862 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:03:05.862 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:03:05.862 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:03:05.862 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 01:03:05.862 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 01:03:05.862 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:03:05.862 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 01:03:05.862 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:03:05.862 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:03:05.862 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:03:05.862 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:03:05.862 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:03:05.862 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:03:05.862 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:03:05.862 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:03:05.862 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:03:05.862 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:03:05.862 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:03:05.862 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:03:06.122 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:03:06.122 06:02:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:06.122 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:06.122 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 01:03:06.122 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:06.122 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:06.122 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:06.122 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 01:03:06.122 01:03:06.122 real 0m14.035s 01:03:06.122 user 0m22.479s 01:03:06.122 sys 0m3.237s 01:03:06.122 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 01:03:06.122 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:03:06.122 ************************************ 01:03:06.122 END TEST nvmf_queue_depth 01:03:06.122 ************************************ 01:03:06.122 06:02:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 01:03:06.122 06:02:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:03:06.122 06:02:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:03:06.122 06:02:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:03:06.122 ************************************ 01:03:06.122 START TEST nvmf_target_multipath 01:03:06.122 ************************************ 01:03:06.122 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 01:03:06.383 * Looking for test storage... 
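[editor's note, not part of the test run] The run_test call above launches the multipath test through the autotest harness, which adds the xtrace prefixes seen in this log. A hedged sketch of how the same step could be reproduced standalone, assuming the same vagrant repo layout as in the log and root privileges (the script creates network namespaces and iptables rules):

# assumes /home/vagrant/spdk_repo/spdk as in the trace; adjust the path for other setups
cd /home/vagrant/spdk_repo/spdk
sudo ./test/nvmf/target/multipath.sh --transport=tcp
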
01:03:06.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:03:06.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:06.383 --rc genhtml_branch_coverage=1 01:03:06.383 --rc genhtml_function_coverage=1 01:03:06.383 --rc genhtml_legend=1 01:03:06.383 --rc geninfo_all_blocks=1 01:03:06.383 --rc geninfo_unexecuted_blocks=1 01:03:06.383 01:03:06.383 ' 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:03:06.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:06.383 --rc genhtml_branch_coverage=1 01:03:06.383 --rc genhtml_function_coverage=1 01:03:06.383 --rc genhtml_legend=1 01:03:06.383 --rc geninfo_all_blocks=1 01:03:06.383 --rc geninfo_unexecuted_blocks=1 01:03:06.383 01:03:06.383 ' 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:03:06.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:06.383 --rc genhtml_branch_coverage=1 01:03:06.383 --rc genhtml_function_coverage=1 01:03:06.383 --rc genhtml_legend=1 01:03:06.383 --rc geninfo_all_blocks=1 01:03:06.383 --rc geninfo_unexecuted_blocks=1 01:03:06.383 01:03:06.383 ' 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:03:06.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:06.383 --rc genhtml_branch_coverage=1 01:03:06.383 --rc genhtml_function_coverage=1 01:03:06.383 --rc genhtml_legend=1 01:03:06.383 --rc geninfo_all_blocks=1 01:03:06.383 --rc geninfo_unexecuted_blocks=1 01:03:06.383 01:03:06.383 ' 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:06.383 
06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:03:06.383 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:03:06.384 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:03:06.384 06:02:00 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:03:06.384 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:03:06.384 Cannot find device "nvmf_init_br" 01:03:06.644 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 01:03:06.644 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:03:06.644 Cannot find device "nvmf_init_br2" 01:03:06.644 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 01:03:06.644 06:02:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:03:06.644 Cannot find device "nvmf_tgt_br" 01:03:06.644 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 01:03:06.644 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:03:06.644 Cannot find device "nvmf_tgt_br2" 01:03:06.644 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 01:03:06.644 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:03:06.644 Cannot find device "nvmf_init_br" 01:03:06.644 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 01:03:06.644 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:03:06.644 Cannot find device "nvmf_init_br2" 01:03:06.644 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 01:03:06.644 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:03:06.644 Cannot find device "nvmf_tgt_br" 01:03:06.644 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 01:03:06.644 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:03:06.644 Cannot find device "nvmf_tgt_br2" 01:03:06.644 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 01:03:06.644 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:03:06.644 Cannot find device "nvmf_br" 01:03:06.644 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 01:03:06.644 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:03:06.644 Cannot find device "nvmf_init_if" 01:03:06.644 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 01:03:06.644 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:03:06.644 Cannot find device "nvmf_init_if2" 01:03:06.644 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 01:03:06.644 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:06.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:06.644 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 01:03:06.644 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:06.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:06.644 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 01:03:06.644 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:03:06.644 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:03:06.644 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:03:06.644 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:03:06.904 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:03:06.904 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:03:06.904 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:03:06.904 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:03:06.904 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:03:06.904 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:03:06.904 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:03:06.904 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:03:06.904 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:03:06.904 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:03:06.904 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:03:06.904 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:03:06.904 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:03:06.904 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
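[editor's note, not part of the test run] The nvmf_veth_init trace above builds the virtual topology one command at a time. A condensed sketch of the first initiator/target pair only, with interface names and addresses taken from the trace (the full setup also creates nvmf_init_if2/nvmf_tgt_if2, the nvmf_br bridge, and the SPDK_NVMF iptables rules, which follow in the log below); run as root:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
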
01:03:06.904 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:03:06.904 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:03:06.904 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:03:06.904 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:03:06.904 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:03:06.904 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:03:06.904 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:03:06.904 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:03:06.904 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:03:06.904 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:03:07.163 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:03:07.163 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:03:07.163 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:03:07.163 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:03:07.163 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:03:07.163 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:03:07.163 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.124 ms 01:03:07.163 01:03:07.163 --- 10.0.0.3 ping statistics --- 01:03:07.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:07.163 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 01:03:07.163 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:03:07.163 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:03:07.163 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 01:03:07.163 01:03:07.163 --- 10.0.0.4 ping statistics --- 01:03:07.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:07.163 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 01:03:07.163 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:03:07.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:03:07.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 01:03:07.164 01:03:07.164 --- 10.0.0.1 ping statistics --- 01:03:07.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:07.164 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 01:03:07.164 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:03:07.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:03:07.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 01:03:07.164 01:03:07.164 --- 10.0.0.2 ping statistics --- 01:03:07.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:07.164 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 01:03:07.164 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:03:07.164 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 01:03:07.164 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:03:07.164 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:03:07.164 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:03:07.164 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:03:07.164 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:03:07.164 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:03:07.164 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:03:07.164 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 01:03:07.164 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 01:03:07.164 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 01:03:07.164 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:03:07.164 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 01:03:07.164 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:03:07.164 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=64633 01:03:07.164 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:03:07.164 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 64633 01:03:07.164 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 64633 ']' 01:03:07.164 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:07.164 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:07.164 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 01:03:07.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:07.164 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:07.164 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:03:07.164 [2024-12-09 06:02:01.654796] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:03:07.164 [2024-12-09 06:02:01.654864] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:03:07.422 [2024-12-09 06:02:01.809452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:03:07.422 [2024-12-09 06:02:01.850037] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:03:07.422 [2024-12-09 06:02:01.850082] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:03:07.422 [2024-12-09 06:02:01.850098] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:03:07.422 [2024-12-09 06:02:01.850106] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:03:07.422 [2024-12-09 06:02:01.850113] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:03:07.422 [2024-12-09 06:02:01.850937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:03:07.422 [2024-12-09 06:02:01.851300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:03:07.422 [2024-12-09 06:02:01.851989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:03:07.422 [2024-12-09 06:02:01.851991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:03:07.422 [2024-12-09 06:02:01.894705] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:03:07.987 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:07.987 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 01:03:07.987 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:03:07.987 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 01:03:07.987 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:03:07.987 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:03:07.987 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:03:08.245 [2024-12-09 06:02:02.737379] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:08.245 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:03:08.502 Malloc0 01:03:08.502 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 01:03:08.760 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:03:09.018 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:03:09.018 [2024-12-09 06:02:03.558329] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:03:09.018 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 01:03:09.277 [2024-12-09 06:02:03.746277] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 01:03:09.277 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid=bac40580-41f0-4da4-8cd9-1be4901a67b8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 01:03:09.535 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid=bac40580-41f0-4da4-8cd9-1be4901a67b8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 01:03:09.535 06:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 01:03:09.535 06:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 01:03:09.535 06:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:03:09.535 06:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 01:03:09.535 06:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 01:03:12.065 06:02:06 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 01:03:12.065 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64723 01:03:12.066 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 01:03:12.066 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 01:03:12.066 [global] 01:03:12.066 thread=1 01:03:12.066 invalidate=1 01:03:12.066 rw=randrw 01:03:12.066 time_based=1 01:03:12.066 runtime=6 01:03:12.066 ioengine=libaio 01:03:12.066 direct=1 01:03:12.066 bs=4096 01:03:12.066 iodepth=128 01:03:12.066 norandommap=0 01:03:12.066 numjobs=1 01:03:12.066 01:03:12.066 verify_dump=1 01:03:12.066 verify_backlog=512 01:03:12.066 verify_state_save=0 01:03:12.066 do_verify=1 01:03:12.066 verify=crc32c-intel 01:03:12.066 [job0] 01:03:12.066 filename=/dev/nvme0n1 01:03:12.066 Could not set queue depth (nvme0n1) 01:03:12.066 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:03:12.066 fio-3.35 01:03:12.066 Starting 1 thread 01:03:12.633 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:03:12.892 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 01:03:13.150 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 01:03:13.150 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 01:03:13.150 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:03:13.150 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:03:13.150 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:03:13.150 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:03:13.150 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 01:03:13.150 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 01:03:13.150 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:03:13.150 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:03:13.150 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 01:03:13.150 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:03:13.150 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:03:13.150 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 01:03:13.409 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 01:03:13.409 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 01:03:13.409 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:03:13.409 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:03:13.409 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:03:13.409 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:03:13.409 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 01:03:13.409 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 01:03:13.409 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:03:13.409 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:03:13.409 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 01:03:13.409 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:03:13.409 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64723 01:03:18.688 01:03:18.688 job0: (groupid=0, jobs=1): err= 0: pid=64749: Mon Dec 9 06:02:12 2024 01:03:18.688 read: IOPS=13.1k, BW=51.2MiB/s (53.7MB/s)(307MiB/5999msec) 01:03:18.688 slat (usec): min=4, max=5436, avg=41.78, stdev=142.57 01:03:18.688 clat (usec): min=963, max=17872, avg=6718.24, stdev=1254.47 01:03:18.688 lat (usec): min=990, max=17895, avg=6760.02, stdev=1259.42 01:03:18.688 clat percentiles (usec): 01:03:18.688 | 1.00th=[ 3982], 5.00th=[ 4883], 10.00th=[ 5604], 20.00th=[ 6128], 01:03:18.688 | 30.00th=[ 6325], 40.00th=[ 6521], 50.00th=[ 6587], 60.00th=[ 6652], 01:03:18.688 | 70.00th=[ 6783], 80.00th=[ 7111], 90.00th=[ 7963], 95.00th=[ 9634], 01:03:18.688 | 99.00th=[10552], 99.50th=[10945], 99.90th=[13960], 99.95th=[15139], 01:03:18.688 | 99.99th=[17433] 01:03:18.688 bw ( KiB/s): min=11584, max=34712, per=52.55%, avg=27568.00, stdev=8048.82, samples=11 01:03:18.688 iops : min= 2896, max= 8678, avg=6891.82, stdev=2012.14, samples=11 01:03:18.688 write: IOPS=7703, BW=30.1MiB/s (31.6MB/s)(156MiB/5182msec); 0 zone resets 01:03:18.688 slat (usec): min=6, max=1727, avg=53.79, stdev=90.59 01:03:18.688 clat (usec): min=894, max=17348, avg=5793.85, stdev=1156.97 01:03:18.688 lat (usec): min=957, max=17377, avg=5847.63, stdev=1159.15 01:03:18.688 clat percentiles (usec): 01:03:18.688 | 1.00th=[ 3458], 5.00th=[ 4015], 10.00th=[ 4424], 20.00th=[ 5080], 01:03:18.688 | 30.00th=[ 5473], 40.00th=[ 5669], 50.00th=[ 5866], 60.00th=[ 5997], 01:03:18.688 | 70.00th=[ 6128], 80.00th=[ 6325], 90.00th=[ 6652], 95.00th=[ 7308], 01:03:18.688 | 99.00th=[ 9765], 99.50th=[10814], 99.90th=[14484], 99.95th=[15533], 01:03:18.688 | 99.99th=[17171] 01:03:18.688 bw ( KiB/s): min=11952, max=34192, per=89.38%, avg=27541.82, stdev=7750.18, samples=11 01:03:18.688 iops : min= 2988, max= 8548, avg=6885.45, stdev=1937.54, samples=11 01:03:18.688 lat (usec) : 1000=0.01% 01:03:18.688 lat (msec) : 2=0.11%, 4=2.13%, 10=95.34%, 20=2.41% 01:03:18.688 cpu : usr=7.10%, sys=33.89%, ctx=7326, majf=0, minf=90 01:03:18.688 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 01:03:18.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:03:18.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:03:18.688 issued rwts: total=78678,39918,0,0 short=0,0,0,0 dropped=0,0,0,0 01:03:18.688 latency : target=0, window=0, percentile=100.00%, depth=128 01:03:18.688 01:03:18.688 Run status group 0 (all jobs): 01:03:18.688 READ: bw=51.2MiB/s (53.7MB/s), 51.2MiB/s-51.2MiB/s (53.7MB/s-53.7MB/s), io=307MiB (322MB), run=5999-5999msec 01:03:18.688 WRITE: bw=30.1MiB/s (31.6MB/s), 30.1MiB/s-30.1MiB/s (31.6MB/s-31.6MB/s), io=156MiB (164MB), run=5182-5182msec 01:03:18.688 01:03:18.688 Disk stats (read/write): 01:03:18.688 nvme0n1: ios=77512/39339, merge=0/0, ticks=478584/199303, in_queue=677887, util=98.66% 01:03:18.688 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 01:03:18.688 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 01:03:18.688 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 01:03:18.688 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 01:03:18.688 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:03:18.688 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:03:18.688 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:03:18.688 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:03:18.688 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 01:03:18.688 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 01:03:18.688 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:03:18.688 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:03:18.688 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:03:18.688 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:03:18.688 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 01:03:18.688 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=64827 01:03:18.688 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 01:03:18.688 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 01:03:18.688 [global] 01:03:18.688 thread=1 01:03:18.688 invalidate=1 01:03:18.688 rw=randrw 01:03:18.688 time_based=1 01:03:18.688 runtime=6 01:03:18.688 ioengine=libaio 01:03:18.688 direct=1 01:03:18.688 bs=4096 01:03:18.688 iodepth=128 01:03:18.688 norandommap=0 01:03:18.688 numjobs=1 01:03:18.688 01:03:18.688 verify_dump=1 01:03:18.688 verify_backlog=512 01:03:18.688 verify_state_save=0 01:03:18.688 do_verify=1 01:03:18.688 verify=crc32c-intel 01:03:18.688 [job0] 01:03:18.688 filename=/dev/nvme0n1 01:03:18.688 Could not set queue depth (nvme0n1) 01:03:18.688 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:03:18.688 fio-3.35 01:03:18.688 Starting 1 thread 01:03:19.627 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:03:19.627 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.4 -s 4420 -n non_optimized 01:03:19.885 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 01:03:19.885 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 01:03:19.885 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:03:19.885 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:03:19.885 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:03:19.885 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:03:19.885 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 01:03:19.885 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 01:03:19.885 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:03:19.885 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:03:19.885 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:03:19.885 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:03:19.885 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:03:20.144 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 01:03:20.144 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 01:03:20.144 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 01:03:20.144 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:03:20.144 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:03:20.144 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 01:03:20.144 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:03:20.144 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 01:03:20.144 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 01:03:20.144 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:03:20.144 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:03:20.144 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:03:20.144 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:03:20.144 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 64827 01:03:25.408 01:03:25.408 job0: (groupid=0, jobs=1): err= 0: pid=64848: Mon Dec 9 06:02:19 2024 01:03:25.408 read: IOPS=12.9k, BW=50.6MiB/s (53.0MB/s)(304MiB/6002msec) 01:03:25.408 slat (usec): min=5, max=4965, avg=38.15, stdev=134.03 01:03:25.408 clat (usec): min=377, max=20238, avg=6846.41, stdev=2301.86 01:03:25.408 lat (usec): min=401, max=20254, avg=6884.56, stdev=2304.73 01:03:25.408 clat percentiles (usec): 01:03:25.408 | 1.00th=[ 1303], 5.00th=[ 3752], 10.00th=[ 4752], 20.00th=[ 5800], 01:03:25.408 | 30.00th=[ 6325], 40.00th=[ 6521], 50.00th=[ 6652], 60.00th=[ 6849], 01:03:25.408 | 70.00th=[ 6980], 80.00th=[ 7373], 90.00th=[ 9372], 95.00th=[10552], 01:03:25.408 | 99.00th=[15926], 99.50th=[16909], 99.90th=[18744], 99.95th=[19268], 01:03:25.408 | 99.99th=[19530] 01:03:25.408 bw ( KiB/s): min=11504, max=33528, per=52.55%, avg=27214.55, stdev=7444.19, samples=11 01:03:25.408 iops : min= 2876, max= 8382, avg=6803.82, stdev=1861.09, samples=11 01:03:25.408 write: IOPS=7508, BW=29.3MiB/s (30.8MB/s)(154MiB/5253msec); 0 zone resets 01:03:25.408 slat (usec): min=15, max=2654, avg=50.85, stdev=82.46 01:03:25.408 clat (usec): min=355, max=18987, avg=5805.69, stdev=2221.73 01:03:25.408 lat (usec): min=392, max=19033, avg=5856.54, stdev=2223.80 01:03:25.408 clat percentiles (usec): 01:03:25.408 | 1.00th=[ 1012], 5.00th=[ 2606], 10.00th=[ 3687], 20.00th=[ 4424], 01:03:25.408 | 30.00th=[ 5014], 40.00th=[ 5538], 50.00th=[ 5800], 60.00th=[ 6063], 01:03:25.408 | 70.00th=[ 6259], 80.00th=[ 6521], 90.00th=[ 7242], 95.00th=[ 9765], 01:03:25.408 | 99.00th=[14353], 99.50th=[15008], 99.90th=[16909], 99.95th=[17957], 01:03:25.408 | 99.99th=[18482] 01:03:25.408 bw ( KiB/s): min=12272, max=33192, per=90.47%, avg=27172.36, stdev=7087.42, samples=11 01:03:25.408 iops : min= 3068, max= 8298, avg=6793.09, stdev=1771.86, samples=11 01:03:25.408 lat (usec) : 500=0.04%, 750=0.17%, 1000=0.36% 01:03:25.408 lat (msec) : 2=2.35%, 4=5.21%, 10=85.41%, 20=6.45%, 50=0.01% 01:03:25.408 cpu : usr=7.93%, sys=32.83%, ctx=7990, majf=0, minf=90 01:03:25.408 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 01:03:25.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:03:25.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:03:25.408 issued rwts: total=77703,39441,0,0 short=0,0,0,0 dropped=0,0,0,0 01:03:25.408 latency : target=0, 
window=0, percentile=100.00%, depth=128 01:03:25.408 01:03:25.408 Run status group 0 (all jobs): 01:03:25.408 READ: bw=50.6MiB/s (53.0MB/s), 50.6MiB/s-50.6MiB/s (53.0MB/s-53.0MB/s), io=304MiB (318MB), run=6002-6002msec 01:03:25.408 WRITE: bw=29.3MiB/s (30.8MB/s), 29.3MiB/s-29.3MiB/s (30.8MB/s-30.8MB/s), io=154MiB (162MB), run=5253-5253msec 01:03:25.408 01:03:25.408 Disk stats (read/write): 01:03:25.408 nvme0n1: ios=76548/38828, merge=0/0, ticks=483669/198627, in_queue=682296, util=98.58% 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:03:25.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:03:25.408 rmmod nvme_tcp 01:03:25.408 rmmod nvme_fabrics 01:03:25.408 rmmod nvme_keyring 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 
64633 ']' 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 64633 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 64633 ']' 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 64633 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64633 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64633' 01:03:25.408 killing process with pid 64633 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 64633 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 64633 01:03:25.408 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:03:25.409 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:03:25.409 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:03:25.409 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 01:03:25.668 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 01:03:25.668 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:03:25.668 06:02:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 01:03:25.668 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:03:25.668 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:03:25.668 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:03:25.668 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:03:25.668 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:03:25.668 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:03:25.668 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:03:25.668 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:03:25.668 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:03:25.668 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:03:25.668 06:02:20 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:03:25.668 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:03:25.668 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:03:25.668 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:25.668 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:25.668 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 01:03:25.668 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:25.668 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:25.668 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:25.928 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 01:03:25.928 ************************************ 01:03:25.928 END TEST nvmf_target_multipath 01:03:25.928 ************************************ 01:03:25.928 01:03:25.928 real 0m19.634s 01:03:25.928 user 1m11.790s 01:03:25.928 sys 0m10.145s 01:03:25.928 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 01:03:25.928 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:03:25.928 06:02:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 01:03:25.928 06:02:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:03:25.928 06:02:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:03:25.928 06:02:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:03:25.928 ************************************ 01:03:25.928 START TEST nvmf_zcopy 01:03:25.928 ************************************ 01:03:25.928 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 01:03:26.189 * Looking for test storage... 
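In the multipath run that just finished above, every nvmf_subsystem_listener_set_ana_state RPC is paired with a check_ana_state call that polls the host's view of the path through sysfs until it reports the expected ANA state. A hypothetical re-creation of that helper, pieced together from the xtrace (only the two tests and timeout=20 are visible there; the sleep-and-retry loop is assumed):

    # sketch of target/multipath.sh check_ana_state, reconstructed from the trace above
    check_ana_state() {
        local path=$1 ana_state=$2                      # e.g. nvme0c1n1 inaccessible
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        # keep polling until the sysfs node exists and reports the expected ANA state;
        # the retry loop itself is assumed, only the two tests appear in the xtrace
        while [[ ! -e $ana_state_f ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
            (( timeout-- == 0 )) && return 1
            sleep 1
        done
    }

This is why the fio runs keep completing on a single controller: the test flips each listener between optimized, non_optimized and inaccessible, waits for the kernel to observe the change, and relies on native multipath to move IO to whichever path is still usable.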
01:03:26.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:03:26.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:26.189 --rc genhtml_branch_coverage=1 01:03:26.189 --rc genhtml_function_coverage=1 01:03:26.189 --rc genhtml_legend=1 01:03:26.189 --rc geninfo_all_blocks=1 01:03:26.189 --rc geninfo_unexecuted_blocks=1 01:03:26.189 01:03:26.189 ' 01:03:26.189 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:03:26.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:26.189 --rc genhtml_branch_coverage=1 01:03:26.190 --rc genhtml_function_coverage=1 01:03:26.190 --rc genhtml_legend=1 01:03:26.190 --rc geninfo_all_blocks=1 01:03:26.190 --rc geninfo_unexecuted_blocks=1 01:03:26.190 01:03:26.190 ' 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:03:26.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:26.190 --rc genhtml_branch_coverage=1 01:03:26.190 --rc genhtml_function_coverage=1 01:03:26.190 --rc genhtml_legend=1 01:03:26.190 --rc geninfo_all_blocks=1 01:03:26.190 --rc geninfo_unexecuted_blocks=1 01:03:26.190 01:03:26.190 ' 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:03:26.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:26.190 --rc genhtml_branch_coverage=1 01:03:26.190 --rc genhtml_function_coverage=1 01:03:26.190 --rc genhtml_legend=1 01:03:26.190 --rc geninfo_all_blocks=1 01:03:26.190 --rc geninfo_unexecuted_blocks=1 01:03:26.190 01:03:26.190 ' 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
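The lt 1.15 2 / cmp_versions calls traced here decide which lcov option set to export: both version strings are split on '.', '-' and ':' and compared element by element. A loose sketch of that comparison, assuming missing components compare as zero and simplifying the operator handling (only '<' is exercised in this run):

    # loose sketch of scripts/common.sh cmp_versions as seen in the xtrace; not the verbatim helper
    lt() { cmp_versions "$1" "<" "$2"; }
    cmp_versions() {
        local ver1 ver2 IFS=.-:                     # split versions on '.', '-' and ':'
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components default to 0 (assumed)
            (( a > b )) && { [[ $op == ">" ]]; return; }
            (( a < b )) && { [[ $op == "<" ]]; return; }
        done
        [[ $op == *=* ]]                            # equal versions satisfy only <=, >=, == (assumed)
    }

Here 1.15 compares below 2, so the legacy branch/function-coverage flags are added to LCOV_OPTS, as the exports below show.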
01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:03:26.190 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
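With NET_TYPE=virt, nvmftestinit then builds the whole fabric out of veth pairs: two initiator interfaces keep 10.0.0.1 and 10.0.0.2 in the default namespace, two target interfaces carry 10.0.0.3 and 10.0.0.4 inside the nvmf_tgt_ns_spdk namespace, and the bridge-side peers are joined on nvmf_br with iptables ACCEPT rules for port 4420. A condensed view of the ip/iptables commands in the trace that follows (names and addresses as they appear there; not the literal nvmf_veth_init function):

    # condensed sketch of the nvmf_veth_init steps traced below
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # the *_if ends (and lo inside the namespace) are brought up as well in the trace
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" up
        ip link set "$br" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

Connectivity is then verified with one ping per address in each direction (10.0.0.3/.4 from the host, 10.0.0.1/.2 from inside the namespace), which is the block of ping statistics printed further down.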
01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:03:26.190 Cannot find device "nvmf_init_br" 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 01:03:26.190 06:02:20 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:03:26.190 Cannot find device "nvmf_init_br2" 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:03:26.190 Cannot find device "nvmf_tgt_br" 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:03:26.190 Cannot find device "nvmf_tgt_br2" 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 01:03:26.190 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:03:26.449 Cannot find device "nvmf_init_br" 01:03:26.449 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 01:03:26.449 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:03:26.449 Cannot find device "nvmf_init_br2" 01:03:26.450 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 01:03:26.450 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:03:26.450 Cannot find device "nvmf_tgt_br" 01:03:26.450 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 01:03:26.450 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:03:26.450 Cannot find device "nvmf_tgt_br2" 01:03:26.450 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 01:03:26.450 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:03:26.450 Cannot find device "nvmf_br" 01:03:26.450 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 01:03:26.450 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:03:26.450 Cannot find device "nvmf_init_if" 01:03:26.450 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 01:03:26.450 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:03:26.450 Cannot find device "nvmf_init_if2" 01:03:26.450 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 01:03:26.450 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:26.450 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:26.450 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 01:03:26.450 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:26.450 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:26.450 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 01:03:26.450 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:03:26.450 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:03:26.450 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 01:03:26.450 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:03:26.450 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:03:26.450 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:03:26.450 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:03:26.710 06:02:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:03:26.710 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:03:26.710 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 01:03:26.710 01:03:26.710 --- 10.0.0.3 ping statistics --- 01:03:26.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:26.710 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:03:26.710 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:03:26.710 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.116 ms 01:03:26.710 01:03:26.710 --- 10.0.0.4 ping statistics --- 01:03:26.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:26.710 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:03:26.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:03:26.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 01:03:26.710 01:03:26.710 --- 10.0.0.1 ping statistics --- 01:03:26.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:26.710 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:03:26.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:03:26.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 01:03:26.710 01:03:26.710 --- 10.0.0.2 ping statistics --- 01:03:26.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:26.710 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65152 01:03:26.710 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:03:26.969 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65152 01:03:26.969 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 65152 ']' 01:03:26.969 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:26.969 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:26.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:26.969 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:26.969 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:26.970 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:03:26.970 [2024-12-09 06:02:21.347420] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:03:26.970 [2024-12-09 06:02:21.347831] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:03:26.970 [2024-12-09 06:02:21.494066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:27.228 [2024-12-09 06:02:21.554453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:03:27.228 [2024-12-09 06:02:21.554498] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:03:27.228 [2024-12-09 06:02:21.554508] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:03:27.228 [2024-12-09 06:02:21.554516] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:03:27.228 [2024-12-09 06:02:21.554524] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:03:27.228 [2024-12-09 06:02:21.554903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:03:27.228 [2024-12-09 06:02:21.629875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:03:27.796 [2024-12-09 06:02:22.264864] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 01:03:27.796 [2024-12-09 06:02:22.288962] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:03:27.796 malloc0 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:03:27.796 { 01:03:27.796 "params": { 01:03:27.796 "name": "Nvme$subsystem", 01:03:27.796 "trtype": "$TEST_TRANSPORT", 01:03:27.796 "traddr": "$NVMF_FIRST_TARGET_IP", 01:03:27.796 "adrfam": "ipv4", 01:03:27.796 "trsvcid": "$NVMF_PORT", 01:03:27.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:03:27.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:03:27.796 "hdgst": ${hdgst:-false}, 01:03:27.796 "ddgst": ${ddgst:-false} 01:03:27.796 }, 01:03:27.796 "method": "bdev_nvme_attach_controller" 01:03:27.796 } 01:03:27.796 EOF 01:03:27.796 )") 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
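The JSON that gen_nvmf_target_json assembles here is printed in full just below; by this point the target side is already fully provisioned for the zero-copy run. Collected from the rpc_cmd calls above (rpc_cmd being the harness wrapper around scripts/rpc.py), the sequence is:

    # target-side provisioning for the zcopy test, as issued in the trace above
    rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy             # TCP transport with zero-copy enabled
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    rpc_cmd bdev_malloc_create 32 4096 -b malloc0                    # malloc bdev: 32 MB, 4096-byte blocks
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf then attaches to 10.0.0.3:4420 as nqn.2016-06.io.spdk:host1 (the controller object printed below) and drives the 10-second verify workload against that namespace.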
01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 01:03:27.796 06:02:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:03:27.796 "params": { 01:03:27.796 "name": "Nvme1", 01:03:27.796 "trtype": "tcp", 01:03:27.796 "traddr": "10.0.0.3", 01:03:27.796 "adrfam": "ipv4", 01:03:27.796 "trsvcid": "4420", 01:03:27.796 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:03:27.796 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:03:27.796 "hdgst": false, 01:03:27.796 "ddgst": false 01:03:27.796 }, 01:03:27.796 "method": "bdev_nvme_attach_controller" 01:03:27.796 }' 01:03:28.055 [2024-12-09 06:02:22.391206] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:03:28.055 [2024-12-09 06:02:22.391381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65185 ] 01:03:28.055 [2024-12-09 06:02:22.540912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:28.055 [2024-12-09 06:02:22.579728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:03:28.055 [2024-12-09 06:02:22.628724] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:03:28.314 Running I/O for 10 seconds... 01:03:30.191 8379.00 IOPS, 65.46 MiB/s [2024-12-09T06:02:26.153Z] 8464.50 IOPS, 66.13 MiB/s [2024-12-09T06:02:27.086Z] 8470.00 IOPS, 66.17 MiB/s [2024-12-09T06:02:28.020Z] 8494.75 IOPS, 66.37 MiB/s [2024-12-09T06:02:28.975Z] 8509.40 IOPS, 66.48 MiB/s [2024-12-09T06:02:29.911Z] 8514.00 IOPS, 66.52 MiB/s [2024-12-09T06:02:30.846Z] 8528.71 IOPS, 66.63 MiB/s [2024-12-09T06:02:31.822Z] 8523.25 IOPS, 66.59 MiB/s [2024-12-09T06:02:32.773Z] 8519.11 IOPS, 66.56 MiB/s [2024-12-09T06:02:32.773Z] 8522.00 IOPS, 66.58 MiB/s 01:03:38.186 Latency(us) 01:03:38.186 [2024-12-09T06:02:32.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:03:38.186 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 01:03:38.186 Verification LBA range: start 0x0 length 0x1000 01:03:38.186 Nvme1n1 : 10.01 8523.41 66.59 0.00 0.00 14976.71 1112.01 24845.78 01:03:38.186 [2024-12-09T06:02:32.773Z] =================================================================================================================== 01:03:38.186 [2024-12-09T06:02:32.773Z] Total : 8523.41 66.59 0.00 0.00 14976.71 1112.01 24845.78 01:03:38.446 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 01:03:38.446 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65308 01:03:38.446 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 01:03:38.446 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:03:38.446 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 01:03:38.446 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 01:03:38.446 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 01:03:38.446 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:03:38.446 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:03:38.446 { 01:03:38.446 "params": { 01:03:38.446 "name": "Nvme$subsystem", 01:03:38.446 "trtype": "$TEST_TRANSPORT", 01:03:38.446 "traddr": "$NVMF_FIRST_TARGET_IP", 01:03:38.446 "adrfam": "ipv4", 01:03:38.446 "trsvcid": "$NVMF_PORT", 01:03:38.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:03:38.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:03:38.446 "hdgst": ${hdgst:-false}, 01:03:38.446 "ddgst": ${ddgst:-false} 01:03:38.446 }, 01:03:38.446 "method": "bdev_nvme_attach_controller" 01:03:38.446 } 01:03:38.446 EOF 01:03:38.446 )") 01:03:38.446 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 01:03:38.446 [2024-12-09 06:02:32.904577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.446 [2024-12-09 06:02:32.904761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.446 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 01:03:38.446 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 01:03:38.446 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:03:38.446 "params": { 01:03:38.446 "name": "Nvme1", 01:03:38.446 "trtype": "tcp", 01:03:38.446 "traddr": "10.0.0.3", 01:03:38.446 "adrfam": "ipv4", 01:03:38.446 "trsvcid": "4420", 01:03:38.446 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:03:38.446 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:03:38.446 "hdgst": false, 01:03:38.446 "ddgst": false 01:03:38.446 }, 01:03:38.446 "method": "bdev_nvme_attach_controller" 01:03:38.446 }' 01:03:38.446 [2024-12-09 06:02:32.920516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.446 [2024-12-09 06:02:32.920648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.446 [2024-12-09 06:02:32.930082] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:03:38.446 [2024-12-09 06:02:32.930285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65308 ] 01:03:38.446 [2024-12-09 06:02:32.936489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.446 [2024-12-09 06:02:32.936619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.446 [2024-12-09 06:02:32.952459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.446 [2024-12-09 06:02:32.952479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.446 [2024-12-09 06:02:32.968431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.446 [2024-12-09 06:02:32.968452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.446 [2024-12-09 06:02:32.984405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.446 [2024-12-09 06:02:32.984424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.446 [2024-12-09 06:02:33.000385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.446 [2024-12-09 06:02:33.000405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.446 [2024-12-09 06:02:33.016369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.446 [2024-12-09 06:02:33.016388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.705 [2024-12-09 06:02:33.032339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.705 [2024-12-09 06:02:33.032359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.705 [2024-12-09 06:02:33.048316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.705 [2024-12-09 06:02:33.048335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.705 [2024-12-09 06:02:33.064303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.705 [2024-12-09 06:02:33.064326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.705 [2024-12-09 06:02:33.080284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.705 [2024-12-09 06:02:33.080303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.705 [2024-12-09 06:02:33.080863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:38.705 [2024-12-09 06:02:33.096249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.705 [2024-12-09 06:02:33.096270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.705 [2024-12-09 06:02:33.112222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.705 [2024-12-09 06:02:33.112239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.705 [2024-12-09 06:02:33.120623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:03:38.705 [2024-12-09 06:02:33.128198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 01:03:38.705 [2024-12-09 06:02:33.128217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.705 [2024-12-09 06:02:33.144174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.705 [2024-12-09 06:02:33.144192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.705 [2024-12-09 06:02:33.160154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.705 [2024-12-09 06:02:33.160173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.705 [2024-12-09 06:02:33.169838] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:03:38.705 [2024-12-09 06:02:33.176156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.705 [2024-12-09 06:02:33.176176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.705 [2024-12-09 06:02:33.192142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.705 [2024-12-09 06:02:33.192163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.705 [2024-12-09 06:02:33.208121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.705 [2024-12-09 06:02:33.208138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.705 [2024-12-09 06:02:33.224131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.705 [2024-12-09 06:02:33.224159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.705 [2024-12-09 06:02:33.240135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.705 [2024-12-09 06:02:33.240162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.705 [2024-12-09 06:02:33.256095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.705 [2024-12-09 06:02:33.256127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.705 [2024-12-09 06:02:33.272098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.705 [2024-12-09 06:02:33.272126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.705 Running I/O for 5 seconds... 
01:03:38.705 [2024-12-09 06:02:33.288068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.705 [2024-12-09 06:02:33.288101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.964 [2024-12-09 06:02:33.307774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.964 [2024-12-09 06:02:33.307802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.964 [2024-12-09 06:02:33.325969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.964 [2024-12-09 06:02:33.325999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.964 [2024-12-09 06:02:33.343527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.964 [2024-12-09 06:02:33.343555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.964 [2024-12-09 06:02:33.359094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.964 [2024-12-09 06:02:33.359136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.964 [2024-12-09 06:02:33.378043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.964 [2024-12-09 06:02:33.378073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.964 [2024-12-09 06:02:33.396024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.964 [2024-12-09 06:02:33.396053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.964 [2024-12-09 06:02:33.413846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.964 [2024-12-09 06:02:33.413874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.964 [2024-12-09 06:02:33.431459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.964 [2024-12-09 06:02:33.431486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.964 [2024-12-09 06:02:33.448646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.964 [2024-12-09 06:02:33.448799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.964 [2024-12-09 06:02:33.466040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.964 [2024-12-09 06:02:33.466070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.964 [2024-12-09 06:02:33.483529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.964 [2024-12-09 06:02:33.483662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.964 [2024-12-09 06:02:33.498894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.964 [2024-12-09 06:02:33.499014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.964 [2024-12-09 06:02:33.517148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.964 [2024-12-09 06:02:33.517177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:38.964 [2024-12-09 06:02:33.534922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:38.964 
[2024-12-09 06:02:33.534950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.223 [2024-12-09 06:02:33.550331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.223 [2024-12-09 06:02:33.550359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.223 [2024-12-09 06:02:33.569866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.223 [2024-12-09 06:02:33.569895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.223 [2024-12-09 06:02:33.584789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.223 [2024-12-09 06:02:33.584926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.223 [2024-12-09 06:02:33.604193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.223 [2024-12-09 06:02:33.604222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.223 [2024-12-09 06:02:33.621742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.223 [2024-12-09 06:02:33.621771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.223 [2024-12-09 06:02:33.639635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.223 [2024-12-09 06:02:33.639768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.223 [2024-12-09 06:02:33.657498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.223 [2024-12-09 06:02:33.657527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.223 [2024-12-09 06:02:33.675548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.223 [2024-12-09 06:02:33.675577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.223 [2024-12-09 06:02:33.692984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.223 [2024-12-09 06:02:33.693013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.223 [2024-12-09 06:02:33.710433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.223 [2024-12-09 06:02:33.710554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.223 [2024-12-09 06:02:33.725571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.223 [2024-12-09 06:02:33.725698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.223 [2024-12-09 06:02:33.744857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.223 [2024-12-09 06:02:33.744989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.223 [2024-12-09 06:02:33.763032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.223 [2024-12-09 06:02:33.763062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.223 [2024-12-09 06:02:33.777812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.223 [2024-12-09 06:02:33.777839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.224 [2024-12-09 06:02:33.797000] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.224 [2024-12-09 06:02:33.797029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.482 [2024-12-09 06:02:33.811704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.482 [2024-12-09 06:02:33.811826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.482 [2024-12-09 06:02:33.829690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.482 [2024-12-09 06:02:33.829719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.482 [2024-12-09 06:02:33.847698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.482 [2024-12-09 06:02:33.847725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.482 [2024-12-09 06:02:33.865523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.482 [2024-12-09 06:02:33.865552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.482 [2024-12-09 06:02:33.880964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.482 [2024-12-09 06:02:33.880994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.482 [2024-12-09 06:02:33.899526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.482 [2024-12-09 06:02:33.899554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.482 [2024-12-09 06:02:33.917242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.482 [2024-12-09 06:02:33.917269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.482 [2024-12-09 06:02:33.932465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.482 [2024-12-09 06:02:33.932492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.482 [2024-12-09 06:02:33.951237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.482 [2024-12-09 06:02:33.951265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.482 [2024-12-09 06:02:33.968581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.482 [2024-12-09 06:02:33.968609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.482 [2024-12-09 06:02:33.986581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.482 [2024-12-09 06:02:33.986726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.482 [2024-12-09 06:02:34.004254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.482 [2024-12-09 06:02:34.004281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.482 [2024-12-09 06:02:34.022062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.482 [2024-12-09 06:02:34.022208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.482 [2024-12-09 06:02:34.037169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.482 [2024-12-09 06:02:34.037197] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.482 [2024-12-09 06:02:34.057073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.482 [2024-12-09 06:02:34.057233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.741 [2024-12-09 06:02:34.074843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.741 [2024-12-09 06:02:34.074872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.741 [2024-12-09 06:02:34.092597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.741 [2024-12-09 06:02:34.092726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.741 [2024-12-09 06:02:34.110660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.741 [2024-12-09 06:02:34.110688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.741 [2024-12-09 06:02:34.125728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.741 [2024-12-09 06:02:34.125756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.741 [2024-12-09 06:02:34.144711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.741 [2024-12-09 06:02:34.144739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.741 [2024-12-09 06:02:34.161731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.742 [2024-12-09 06:02:34.161759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.742 [2024-12-09 06:02:34.179045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.742 [2024-12-09 06:02:34.179072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.742 [2024-12-09 06:02:34.196764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.742 [2024-12-09 06:02:34.196791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.742 [2024-12-09 06:02:34.214232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.742 [2024-12-09 06:02:34.214260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.742 [2024-12-09 06:02:34.231761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.742 [2024-12-09 06:02:34.231912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.742 [2024-12-09 06:02:34.246417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.742 [2024-12-09 06:02:34.246569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.742 [2024-12-09 06:02:34.263887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.742 [2024-12-09 06:02:34.263917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.742 [2024-12-09 06:02:34.281280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.742 16041.00 IOPS, 125.32 MiB/s [2024-12-09T06:02:34.329Z] [2024-12-09 06:02:34.281411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.742 [2024-12-09 
06:02:34.298880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.742 [2024-12-09 06:02:34.298910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:39.742 [2024-12-09 06:02:34.314075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:39.742 [2024-12-09 06:02:34.314119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.001 [2024-12-09 06:02:34.332713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.001 [2024-12-09 06:02:34.332844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.001 [2024-12-09 06:02:34.350667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.001 [2024-12-09 06:02:34.350696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.001 [2024-12-09 06:02:34.368641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.001 [2024-12-09 06:02:34.368669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.001 [2024-12-09 06:02:34.386241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.001 [2024-12-09 06:02:34.386269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.001 [2024-12-09 06:02:34.403878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.001 [2024-12-09 06:02:34.404021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.001 [2024-12-09 06:02:34.421569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.001 [2024-12-09 06:02:34.421598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.001 [2024-12-09 06:02:34.439338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.001 [2024-12-09 06:02:34.439366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.001 [2024-12-09 06:02:34.456251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.001 [2024-12-09 06:02:34.456279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.001 [2024-12-09 06:02:34.473796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.001 [2024-12-09 06:02:34.473824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.001 [2024-12-09 06:02:34.490795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.001 [2024-12-09 06:02:34.490922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.001 [2024-12-09 06:02:34.508615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.001 [2024-12-09 06:02:34.508643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.001 [2024-12-09 06:02:34.526576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.001 [2024-12-09 06:02:34.526604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.001 [2024-12-09 06:02:34.540790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.001 [2024-12-09 06:02:34.540819] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.001 [2024-12-09 06:02:34.557869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.001 [2024-12-09 06:02:34.557990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.001 [2024-12-09 06:02:34.575693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.001 [2024-12-09 06:02:34.575721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.260 [2024-12-09 06:02:34.590778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.260 [2024-12-09 06:02:34.590806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.260 [2024-12-09 06:02:34.610054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.260 [2024-12-09 06:02:34.610083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.260 [2024-12-09 06:02:34.627846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.260 [2024-12-09 06:02:34.627875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.260 [2024-12-09 06:02:34.645549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.260 [2024-12-09 06:02:34.645576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.260 [2024-12-09 06:02:34.663055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.260 [2024-12-09 06:02:34.663084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.260 [2024-12-09 06:02:34.680615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.260 [2024-12-09 06:02:34.680731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.260 [2024-12-09 06:02:34.698310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.260 [2024-12-09 06:02:34.698339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.260 [2024-12-09 06:02:34.715764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.260 [2024-12-09 06:02:34.715887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.260 [2024-12-09 06:02:34.730519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.260 [2024-12-09 06:02:34.730662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.260 [2024-12-09 06:02:34.746151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.260 [2024-12-09 06:02:34.746179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.260 [2024-12-09 06:02:34.763531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.260 [2024-12-09 06:02:34.763559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.260 [2024-12-09 06:02:34.781058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.260 [2024-12-09 06:02:34.781101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.260 [2024-12-09 06:02:34.798674] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.260 [2024-12-09 06:02:34.798702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.260 [2024-12-09 06:02:34.816274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.260 [2024-12-09 06:02:34.816300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.260 [2024-12-09 06:02:34.833895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.260 [2024-12-09 06:02:34.834039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.520 [2024-12-09 06:02:34.851819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.520 [2024-12-09 06:02:34.851847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.520 [2024-12-09 06:02:34.869322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.520 [2024-12-09 06:02:34.869455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.520 [2024-12-09 06:02:34.887611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.520 [2024-12-09 06:02:34.887640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.520 [2024-12-09 06:02:34.905630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.520 [2024-12-09 06:02:34.905658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.520 [2024-12-09 06:02:34.923376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.520 [2024-12-09 06:02:34.923404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.520 [2024-12-09 06:02:34.941124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.520 [2024-12-09 06:02:34.941153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.520 [2024-12-09 06:02:34.958646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.520 [2024-12-09 06:02:34.958674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.520 [2024-12-09 06:02:34.976117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.520 [2024-12-09 06:02:34.976144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.520 [2024-12-09 06:02:34.993734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.520 [2024-12-09 06:02:34.993762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.520 [2024-12-09 06:02:35.011206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.520 [2024-12-09 06:02:35.011233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.520 [2024-12-09 06:02:35.025511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.520 [2024-12-09 06:02:35.025540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.520 [2024-12-09 06:02:35.042712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.520 [2024-12-09 06:02:35.042864] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.520 [2024-12-09 06:02:35.060297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.520 [2024-12-09 06:02:35.060324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.520 [2024-12-09 06:02:35.077801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.520 [2024-12-09 06:02:35.077927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.520 [2024-12-09 06:02:35.095098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.520 [2024-12-09 06:02:35.095125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.780 [2024-12-09 06:02:35.112468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.780 [2024-12-09 06:02:35.112598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.780 [2024-12-09 06:02:35.129867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.780 [2024-12-09 06:02:35.129896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.780 [2024-12-09 06:02:35.147149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.780 [2024-12-09 06:02:35.147176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.780 [2024-12-09 06:02:35.164364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.780 [2024-12-09 06:02:35.164391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.780 [2024-12-09 06:02:35.181677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.780 [2024-12-09 06:02:35.181706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.780 [2024-12-09 06:02:35.196303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.780 [2024-12-09 06:02:35.196333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.780 [2024-12-09 06:02:35.212072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.780 [2024-12-09 06:02:35.212114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.780 [2024-12-09 06:02:35.229646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.780 [2024-12-09 06:02:35.229674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.780 [2024-12-09 06:02:35.244500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.780 [2024-12-09 06:02:35.244631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.780 [2024-12-09 06:02:35.260275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.780 [2024-12-09 06:02:35.260408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.780 [2024-12-09 06:02:35.278032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.780 [2024-12-09 06:02:35.278195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.780 16242.50 IOPS, 126.89 MiB/s [2024-12-09T06:02:35.367Z] [2024-12-09 
06:02:35.295486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.780 [2024-12-09 06:02:35.295514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.780 [2024-12-09 06:02:35.310308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.780 [2024-12-09 06:02:35.310443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.780 [2024-12-09 06:02:35.329566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.780 [2024-12-09 06:02:35.329711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.780 [2024-12-09 06:02:35.344296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.780 [2024-12-09 06:02:35.344324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:40.780 [2024-12-09 06:02:35.361380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:40.780 [2024-12-09 06:02:35.361410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.039 [2024-12-09 06:02:35.378461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.039 [2024-12-09 06:02:35.378490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.039 [2024-12-09 06:02:35.396404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.039 [2024-12-09 06:02:35.396432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.039 [2024-12-09 06:02:35.411741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.039 [2024-12-09 06:02:35.411869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.039 [2024-12-09 06:02:35.427711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.039 [2024-12-09 06:02:35.427741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.039 [2024-12-09 06:02:35.445430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.039 [2024-12-09 06:02:35.445468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.039 [2024-12-09 06:02:35.463266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.039 [2024-12-09 06:02:35.463293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.039 [2024-12-09 06:02:35.480297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.039 [2024-12-09 06:02:35.480325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.039 [2024-12-09 06:02:35.498032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.039 [2024-12-09 06:02:35.498061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.039 [2024-12-09 06:02:35.515622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.039 [2024-12-09 06:02:35.515650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.039 [2024-12-09 06:02:35.532885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.039 [2024-12-09 06:02:35.532912] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.039 [2024-12-09 06:02:35.549935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.039 [2024-12-09 06:02:35.549963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.039 [2024-12-09 06:02:35.564668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.039 [2024-12-09 06:02:35.564801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.039 [2024-12-09 06:02:35.580848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.039 [2024-12-09 06:02:35.580877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.039 [2024-12-09 06:02:35.598828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.039 [2024-12-09 06:02:35.598855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.039 [2024-12-09 06:02:35.616503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.039 [2024-12-09 06:02:35.616530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.298 [2024-12-09 06:02:35.634371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.298 [2024-12-09 06:02:35.634400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.298 [2024-12-09 06:02:35.651626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.298 [2024-12-09 06:02:35.651653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.298 [2024-12-09 06:02:35.669232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.298 [2024-12-09 06:02:35.669258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.298 [2024-12-09 06:02:35.686907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.298 [2024-12-09 06:02:35.686936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.298 [2024-12-09 06:02:35.703975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.298 [2024-12-09 06:02:35.704003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.298 [2024-12-09 06:02:35.721648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.298 [2024-12-09 06:02:35.721676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.298 [2024-12-09 06:02:35.739550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.298 [2024-12-09 06:02:35.739578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.298 [2024-12-09 06:02:35.754060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.298 [2024-12-09 06:02:35.754102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.298 [2024-12-09 06:02:35.771242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.298 [2024-12-09 06:02:35.771268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.298 [2024-12-09 06:02:35.788723] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.298 [2024-12-09 06:02:35.788846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.298 [2024-12-09 06:02:35.806788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.298 [2024-12-09 06:02:35.806817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.298 [2024-12-09 06:02:35.824163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.298 [2024-12-09 06:02:35.824188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.298 [2024-12-09 06:02:35.841752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.298 [2024-12-09 06:02:35.841781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.298 [2024-12-09 06:02:35.859300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.298 [2024-12-09 06:02:35.859423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.298 [2024-12-09 06:02:35.876775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.298 [2024-12-09 06:02:35.876804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.556 [2024-12-09 06:02:35.894994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.556 [2024-12-09 06:02:35.895125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.556 [2024-12-09 06:02:35.912588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.556 [2024-12-09 06:02:35.912617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.556 [2024-12-09 06:02:35.930445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.556 [2024-12-09 06:02:35.930475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.556 [2024-12-09 06:02:35.945825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.556 [2024-12-09 06:02:35.945948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.556 [2024-12-09 06:02:35.964844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.556 [2024-12-09 06:02:35.964984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.556 [2024-12-09 06:02:35.982455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.556 [2024-12-09 06:02:35.982484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.556 [2024-12-09 06:02:35.996974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.556 [2024-12-09 06:02:35.997002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.556 [2024-12-09 06:02:36.012339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.556 [2024-12-09 06:02:36.012366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.556 [2024-12-09 06:02:36.029516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.556 [2024-12-09 06:02:36.029544] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.556 [2024-12-09 06:02:36.043721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.556 [2024-12-09 06:02:36.043845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.556 [2024-12-09 06:02:36.061211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.556 [2024-12-09 06:02:36.061238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.556 [2024-12-09 06:02:36.075270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.556 [2024-12-09 06:02:36.075296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.556 [2024-12-09 06:02:36.092502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.556 [2024-12-09 06:02:36.092530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.556 [2024-12-09 06:02:36.110067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.556 [2024-12-09 06:02:36.110109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.557 [2024-12-09 06:02:36.127612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.557 [2024-12-09 06:02:36.127639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.815 [2024-12-09 06:02:36.145351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.815 [2024-12-09 06:02:36.145379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.815 [2024-12-09 06:02:36.163132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.815 [2024-12-09 06:02:36.163159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.815 [2024-12-09 06:02:36.180728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.815 [2024-12-09 06:02:36.180755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.815 [2024-12-09 06:02:36.198129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.815 [2024-12-09 06:02:36.198157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.815 [2024-12-09 06:02:36.215371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.815 [2024-12-09 06:02:36.215399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.815 [2024-12-09 06:02:36.232944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.815 [2024-12-09 06:02:36.232972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.815 [2024-12-09 06:02:36.250450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.815 [2024-12-09 06:02:36.250478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.815 [2024-12-09 06:02:36.267029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.815 [2024-12-09 06:02:36.267058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.815 16344.67 IOPS, 127.69 MiB/s [2024-12-09T06:02:36.402Z] [2024-12-09 
06:02:36.284699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.815 [2024-12-09 06:02:36.284727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.815 [2024-12-09 06:02:36.302220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.815 [2024-12-09 06:02:36.302249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.815 [2024-12-09 06:02:36.319816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.815 [2024-12-09 06:02:36.319843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.815 [2024-12-09 06:02:36.337275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.815 [2024-12-09 06:02:36.337301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.815 [2024-12-09 06:02:36.354738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.815 [2024-12-09 06:02:36.354766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.815 [2024-12-09 06:02:36.372377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.815 [2024-12-09 06:02:36.372516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:41.815 [2024-12-09 06:02:36.389944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:41.815 [2024-12-09 06:02:36.389973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.074 [2024-12-09 06:02:36.407468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.074 [2024-12-09 06:02:36.407592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.074 [2024-12-09 06:02:36.425054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.074 [2024-12-09 06:02:36.425082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.074 [2024-12-09 06:02:36.442548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.074 [2024-12-09 06:02:36.442679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.074 [2024-12-09 06:02:36.460097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.074 [2024-12-09 06:02:36.460125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.074 [2024-12-09 06:02:36.477779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.074 [2024-12-09 06:02:36.477901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.074 [2024-12-09 06:02:36.496029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.074 [2024-12-09 06:02:36.496059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.074 [2024-12-09 06:02:36.511021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.074 [2024-12-09 06:02:36.511049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.074 [2024-12-09 06:02:36.530452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.074 [2024-12-09 06:02:36.530482] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.074 [2024-12-09 06:02:36.548383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.074 [2024-12-09 06:02:36.548410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.074 [2024-12-09 06:02:36.565957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.074 [2024-12-09 06:02:36.565985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.074 [2024-12-09 06:02:36.584121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.074 [2024-12-09 06:02:36.584148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.074 [2024-12-09 06:02:36.599083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.074 [2024-12-09 06:02:36.599127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.074 [2024-12-09 06:02:36.618472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.074 [2024-12-09 06:02:36.618500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.074 [2024-12-09 06:02:36.633241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.074 [2024-12-09 06:02:36.633266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.074 [2024-12-09 06:02:36.652484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.074 [2024-12-09 06:02:36.652512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.333 [2024-12-09 06:02:36.669888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.333 [2024-12-09 06:02:36.669917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.333 [2024-12-09 06:02:36.687115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.333 [2024-12-09 06:02:36.687141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.333 [2024-12-09 06:02:36.704832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.333 [2024-12-09 06:02:36.704860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.333 [2024-12-09 06:02:36.719360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.333 [2024-12-09 06:02:36.719493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.333 [2024-12-09 06:02:36.733804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.333 [2024-12-09 06:02:36.733833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.333 [2024-12-09 06:02:36.749350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.333 [2024-12-09 06:02:36.749377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.333 [2024-12-09 06:02:36.767016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.333 [2024-12-09 06:02:36.767179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.333 [2024-12-09 06:02:36.784910] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.333 [2024-12-09 06:02:36.784938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.333 [2024-12-09 06:02:36.802300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.333 [2024-12-09 06:02:36.802419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.333 [2024-12-09 06:02:36.820117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.333 [2024-12-09 06:02:36.820146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.333 [2024-12-09 06:02:36.837656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.333 [2024-12-09 06:02:36.837782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.333 [2024-12-09 06:02:36.855263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.333 [2024-12-09 06:02:36.855290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.333 [2024-12-09 06:02:36.872857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.333 [2024-12-09 06:02:36.872977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.333 [2024-12-09 06:02:36.890270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.333 [2024-12-09 06:02:36.890297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.333 [2024-12-09 06:02:36.904792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.333 [2024-12-09 06:02:36.904820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.591 [2024-12-09 06:02:36.920930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.591 [2024-12-09 06:02:36.920959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.591 [2024-12-09 06:02:36.935880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.591 [2024-12-09 06:02:36.935996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.591 [2024-12-09 06:02:36.955537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.591 [2024-12-09 06:02:36.955567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.591 [2024-12-09 06:02:36.973351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.591 [2024-12-09 06:02:36.973380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.591 [2024-12-09 06:02:36.990876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.591 [2024-12-09 06:02:36.991015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.591 [2024-12-09 06:02:37.008718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.591 [2024-12-09 06:02:37.008746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.592 [2024-12-09 06:02:37.026019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.592 [2024-12-09 06:02:37.026158] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.592 [2024-12-09 06:02:37.043395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.592 [2024-12-09 06:02:37.043423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.592 [2024-12-09 06:02:37.060603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.592 [2024-12-09 06:02:37.060631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.592 [2024-12-09 06:02:37.077772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.592 [2024-12-09 06:02:37.077900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.592 [2024-12-09 06:02:37.094844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.592 [2024-12-09 06:02:37.094873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.592 [2024-12-09 06:02:37.112350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.592 [2024-12-09 06:02:37.112482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.592 [2024-12-09 06:02:37.129683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.592 [2024-12-09 06:02:37.129711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.592 [2024-12-09 06:02:37.146785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.592 [2024-12-09 06:02:37.146813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.592 [2024-12-09 06:02:37.164239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.592 [2024-12-09 06:02:37.164265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.849 [2024-12-09 06:02:37.181680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.849 [2024-12-09 06:02:37.181709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.849 [2024-12-09 06:02:37.199329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.849 [2024-12-09 06:02:37.199449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.849 [2024-12-09 06:02:37.216577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.849 [2024-12-09 06:02:37.216606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.849 [2024-12-09 06:02:37.234125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.849 [2024-12-09 06:02:37.234153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.849 [2024-12-09 06:02:37.251364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.849 [2024-12-09 06:02:37.251391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.849 [2024-12-09 06:02:37.268359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.849 [2024-12-09 06:02:37.268387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.849 16385.00 IOPS, 128.01 MiB/s [2024-12-09T06:02:37.436Z] [2024-12-09 
06:02:37.286120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.849 [2024-12-09 06:02:37.286149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.849 [2024-12-09 06:02:37.303768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.849 [2024-12-09 06:02:37.303796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.849 [2024-12-09 06:02:37.320912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.849 [2024-12-09 06:02:37.320940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.849 [2024-12-09 06:02:37.338244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.849 [2024-12-09 06:02:37.338271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.849 [2024-12-09 06:02:37.355771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.849 [2024-12-09 06:02:37.355799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.849 [2024-12-09 06:02:37.373407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.849 [2024-12-09 06:02:37.373556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.849 [2024-12-09 06:02:37.388586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.849 [2024-12-09 06:02:37.388705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.849 [2024-12-09 06:02:37.407543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.849 [2024-12-09 06:02:37.407668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:42.849 [2024-12-09 06:02:37.425157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:42.849 [2024-12-09 06:02:37.425184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.107 [2024-12-09 06:02:37.442354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.107 [2024-12-09 06:02:37.442490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.107 [2024-12-09 06:02:37.460013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.107 [2024-12-09 06:02:37.460041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.107 [2024-12-09 06:02:37.477117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.107 [2024-12-09 06:02:37.477144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.107 [2024-12-09 06:02:37.494605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.107 [2024-12-09 06:02:37.494632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.107 [2024-12-09 06:02:37.511739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.107 [2024-12-09 06:02:37.511767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.107 [2024-12-09 06:02:37.529455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.107 [2024-12-09 06:02:37.529498] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.107 [2024-12-09 06:02:37.546909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.107 [2024-12-09 06:02:37.546939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.107 [2024-12-09 06:02:37.562688] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.107 [2024-12-09 06:02:37.562716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.107 [2024-12-09 06:02:37.581272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.107 [2024-12-09 06:02:37.581299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.107 [2024-12-09 06:02:37.599568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.107 [2024-12-09 06:02:37.599598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.107 [2024-12-09 06:02:37.617130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.107 [2024-12-09 06:02:37.617158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.107 [2024-12-09 06:02:37.634181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.107 [2024-12-09 06:02:37.634207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.107 [2024-12-09 06:02:37.651752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.107 [2024-12-09 06:02:37.651780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.107 [2024-12-09 06:02:37.669424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.107 [2024-12-09 06:02:37.669459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.107 [2024-12-09 06:02:37.683872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.107 [2024-12-09 06:02:37.683900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.366 [2024-12-09 06:02:37.701168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.366 [2024-12-09 06:02:37.701195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.366 [2024-12-09 06:02:37.715601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.366 [2024-12-09 06:02:37.715627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.366 [2024-12-09 06:02:37.733015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.366 [2024-12-09 06:02:37.733154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.366 [2024-12-09 06:02:37.750496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.366 [2024-12-09 06:02:37.750524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.366 [2024-12-09 06:02:37.768029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.366 [2024-12-09 06:02:37.768057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.366 [2024-12-09 06:02:37.785860] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.366 [2024-12-09 06:02:37.785889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.366 [2024-12-09 06:02:37.802862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.366 [2024-12-09 06:02:37.802890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.366 [2024-12-09 06:02:37.820404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.366 [2024-12-09 06:02:37.820537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.366 [2024-12-09 06:02:37.838325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.366 [2024-12-09 06:02:37.838353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.366 [2024-12-09 06:02:37.855725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.366 [2024-12-09 06:02:37.855847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.366 [2024-12-09 06:02:37.873133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.366 [2024-12-09 06:02:37.873161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.366 [2024-12-09 06:02:37.890659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.366 [2024-12-09 06:02:37.890791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.366 [2024-12-09 06:02:37.907994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.366 [2024-12-09 06:02:37.908023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.366 [2024-12-09 06:02:37.922676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.366 [2024-12-09 06:02:37.922704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.366 [2024-12-09 06:02:37.938317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.366 [2024-12-09 06:02:37.938346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.624 [2024-12-09 06:02:37.956167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.625 [2024-12-09 06:02:37.956194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.625 [2024-12-09 06:02:37.973413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.625 [2024-12-09 06:02:37.973578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.625 [2024-12-09 06:02:37.991910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.625 [2024-12-09 06:02:37.991939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.625 [2024-12-09 06:02:38.006896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.625 [2024-12-09 06:02:38.006926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.625 [2024-12-09 06:02:38.026733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.625 [2024-12-09 06:02:38.026761] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.625 [2024-12-09 06:02:38.044146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.625 [2024-12-09 06:02:38.044174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.625 [2024-12-09 06:02:38.061591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.625 [2024-12-09 06:02:38.061620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.625 [2024-12-09 06:02:38.078610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.625 [2024-12-09 06:02:38.078639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.625 [2024-12-09 06:02:38.096496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.625 [2024-12-09 06:02:38.096524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.625 [2024-12-09 06:02:38.114516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.625 [2024-12-09 06:02:38.114545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.625 [2024-12-09 06:02:38.131668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.625 [2024-12-09 06:02:38.131808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.625 [2024-12-09 06:02:38.149272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.625 [2024-12-09 06:02:38.149299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.625 [2024-12-09 06:02:38.167243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.625 [2024-12-09 06:02:38.167269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.625 [2024-12-09 06:02:38.184549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.625 [2024-12-09 06:02:38.184593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.625 [2024-12-09 06:02:38.202079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.625 [2024-12-09 06:02:38.202123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.883 [2024-12-09 06:02:38.219290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.883 [2024-12-09 06:02:38.219317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.883 [2024-12-09 06:02:38.234060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.883 [2024-12-09 06:02:38.234101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.883 [2024-12-09 06:02:38.252849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.883 [2024-12-09 06:02:38.252877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.883 [2024-12-09 06:02:38.270686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.883 [2024-12-09 06:02:38.270827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.883 16420.80 IOPS, 128.29 MiB/s 01:03:43.883 Latency(us) 01:03:43.883 
[2024-12-09T06:02:38.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:03:43.883 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 01:03:43.883 Nvme1n1 : 5.01 16427.74 128.34 0.00 0.00 7783.69 3158.36 18844.89 01:03:43.883 [2024-12-09T06:02:38.470Z] =================================================================================================================== 01:03:43.883 [2024-12-09T06:02:38.470Z] Total : 16427.74 128.34 0.00 0.00 7783.69 3158.36 18844.89 01:03:43.883 [2024-12-09 06:02:38.284993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.883 [2024-12-09 06:02:38.285020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.883 [2024-12-09 06:02:38.300961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.883 [2024-12-09 06:02:38.300985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.883 [2024-12-09 06:02:38.316932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.883 [2024-12-09 06:02:38.317044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.883 [2024-12-09 06:02:38.332912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.883 [2024-12-09 06:02:38.332933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.883 [2024-12-09 06:02:38.348887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.883 [2024-12-09 06:02:38.348906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.883 [2024-12-09 06:02:38.364864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.883 [2024-12-09 06:02:38.364883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.883 [2024-12-09 06:02:38.380840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.883 [2024-12-09 06:02:38.380859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.883 [2024-12-09 06:02:38.396821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.883 [2024-12-09 06:02:38.396839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.883 [2024-12-09 06:02:38.412799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.883 [2024-12-09 06:02:38.412816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.883 [2024-12-09 06:02:38.428774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.883 [2024-12-09 06:02:38.428791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.883 [2024-12-09 06:02:38.444754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:03:43.883 [2024-12-09 06:02:38.444772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:43.883 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65308) - No such process 01:03:43.883 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65308 01:03:43.883 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 01:03:43.883 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:43.883 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:03:43.883 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:43.883 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 01:03:43.883 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:43.884 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:03:44.142 delay0 01:03:44.142 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:44.142 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 01:03:44.142 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:44.142 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:03:44.142 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:44.142 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 01:03:44.142 [2024-12-09 06:02:38.676894] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 01:03:50.726 Initializing NVMe Controllers 01:03:50.726 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:03:50.726 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:03:50.726 Initialization complete. Launching workers. 
01:03:50.726 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 104 01:03:50.726 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 391, failed to submit 33 01:03:50.726 success 271, unsuccessful 120, failed 0 01:03:50.726 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 01:03:50.726 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 01:03:50.726 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 01:03:50.726 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 01:03:50.726 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:03:50.726 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 01:03:50.726 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 01:03:50.726 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:03:50.726 rmmod nvme_tcp 01:03:50.726 rmmod nvme_fabrics 01:03:50.726 rmmod nvme_keyring 01:03:50.726 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:03:50.726 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 01:03:50.726 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 01:03:50.726 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65152 ']' 01:03:50.726 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65152 01:03:50.726 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 65152 ']' 01:03:50.726 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 65152 01:03:50.726 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 01:03:50.726 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:50.726 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65152 01:03:50.726 killing process with pid 65152 01:03:50.726 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:03:50.726 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:03:50.726 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65152' 01:03:50.726 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 65152 01:03:50.726 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 65152 01:03:50.726 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:03:50.726 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:03:50.726 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:03:50.726 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 01:03:50.726 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 01:03:50.726 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:03:50.726 06:02:45 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 01:03:50.726 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:03:50.726 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:03:50.726 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:03:50.726 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:03:50.726 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:03:50.726 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:03:50.726 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:03:50.726 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:03:50.726 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:03:50.726 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:03:50.726 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:03:50.985 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:03:50.985 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:03:50.985 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:50.985 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:50.985 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 01:03:50.985 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:50.985 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:50.985 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:50.985 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 01:03:50.985 01:03:50.985 real 0m25.104s 01:03:50.985 user 0m38.492s 01:03:50.985 sys 0m9.366s 01:03:50.985 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 01:03:50.985 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:03:50.985 ************************************ 01:03:50.985 END TEST nvmf_zcopy 01:03:50.985 ************************************ 01:03:50.985 06:02:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 01:03:50.985 06:02:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:03:50.985 06:02:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:03:50.985 06:02:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:03:51.244 ************************************ 01:03:51.244 START TEST nvmf_nmic 01:03:51.244 ************************************ 01:03:51.244 06:02:45 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 01:03:51.244 * Looking for test storage... 01:03:51.244 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:03:51.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:51.244 --rc genhtml_branch_coverage=1 01:03:51.244 --rc genhtml_function_coverage=1 01:03:51.244 --rc genhtml_legend=1 01:03:51.244 --rc geninfo_all_blocks=1 01:03:51.244 --rc geninfo_unexecuted_blocks=1 01:03:51.244 01:03:51.244 ' 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:03:51.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:51.244 --rc genhtml_branch_coverage=1 01:03:51.244 --rc genhtml_function_coverage=1 01:03:51.244 --rc genhtml_legend=1 01:03:51.244 --rc geninfo_all_blocks=1 01:03:51.244 --rc geninfo_unexecuted_blocks=1 01:03:51.244 01:03:51.244 ' 01:03:51.244 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:03:51.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:51.244 --rc genhtml_branch_coverage=1 01:03:51.244 --rc genhtml_function_coverage=1 01:03:51.244 --rc genhtml_legend=1 01:03:51.244 --rc geninfo_all_blocks=1 01:03:51.244 --rc geninfo_unexecuted_blocks=1 01:03:51.244 01:03:51.245 ' 01:03:51.245 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:03:51.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:51.245 --rc genhtml_branch_coverage=1 01:03:51.245 --rc genhtml_function_coverage=1 01:03:51.245 --rc genhtml_legend=1 01:03:51.245 --rc geninfo_all_blocks=1 01:03:51.245 --rc geninfo_unexecuted_blocks=1 01:03:51.245 01:03:51.245 ' 01:03:51.245 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:03:51.245 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 01:03:51.245 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:03:51.245 06:02:45 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:03:51.245 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:03:51.245 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:03:51.245 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:03:51.245 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:03:51.245 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:03:51.245 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:03:51.245 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:03:51.245 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:03:51.505 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 01:03:51.505 06:02:45 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:03:51.505 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:03:51.506 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:03:51.506 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:03:51.506 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:03:51.506 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:03:51.506 Cannot 
find device "nvmf_init_br" 01:03:51.506 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 01:03:51.506 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:03:51.506 Cannot find device "nvmf_init_br2" 01:03:51.506 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 01:03:51.506 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:03:51.506 Cannot find device "nvmf_tgt_br" 01:03:51.506 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 01:03:51.506 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:03:51.506 Cannot find device "nvmf_tgt_br2" 01:03:51.506 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 01:03:51.506 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:03:51.506 Cannot find device "nvmf_init_br" 01:03:51.506 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 01:03:51.506 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:03:51.506 Cannot find device "nvmf_init_br2" 01:03:51.506 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 01:03:51.506 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:03:51.506 Cannot find device "nvmf_tgt_br" 01:03:51.506 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 01:03:51.506 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:03:51.506 Cannot find device "nvmf_tgt_br2" 01:03:51.506 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 01:03:51.506 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:03:51.506 Cannot find device "nvmf_br" 01:03:51.506 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 01:03:51.506 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:03:51.506 Cannot find device "nvmf_init_if" 01:03:51.506 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 01:03:51.506 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:03:51.506 Cannot find device "nvmf_init_if2" 01:03:51.506 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 01:03:51.506 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:51.506 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:51.506 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 01:03:51.506 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:51.506 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:51.506 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 01:03:51.506 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:03:51.506 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
01:03:51.765 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:03:51.765 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:03:51.765 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:03:51.765 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:03:51.765 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:03:51.765 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:03:51.765 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:03:51.765 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:03:51.765 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:03:51.765 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:03:51.765 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:03:51.765 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:03:51.765 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:03:51.765 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:03:51.765 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:03:51.765 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:03:51.765 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:03:51.765 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:03:51.765 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:03:51.765 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:03:51.765 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:03:51.765 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:03:51.765 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:03:51.765 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:03:52.025 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:03:52.025 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 01:03:52.025 01:03:52.025 --- 10.0.0.3 ping statistics --- 01:03:52.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:52.025 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:03:52.025 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:03:52.025 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.096 ms 01:03:52.025 01:03:52.025 --- 10.0.0.4 ping statistics --- 01:03:52.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:52.025 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:03:52.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:03:52.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 01:03:52.025 01:03:52.025 --- 10.0.0.1 ping statistics --- 01:03:52.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:52.025 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:03:52.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:03:52.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 01:03:52.025 01:03:52.025 --- 10.0.0.2 ping statistics --- 01:03:52.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:52.025 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65692 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65692 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 65692 ']' 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:52.025 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:52.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:52.026 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:52.026 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:52.026 06:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:03:52.026 [2024-12-09 06:02:46.515462] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:03:52.026 [2024-12-09 06:02:46.515551] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:03:52.285 [2024-12-09 06:02:46.670009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:03:52.285 [2024-12-09 06:02:46.711135] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:03:52.285 [2024-12-09 06:02:46.711181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:03:52.285 [2024-12-09 06:02:46.711190] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:03:52.285 [2024-12-09 06:02:46.711197] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:03:52.285 [2024-12-09 06:02:46.711204] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:03:52.285 [2024-12-09 06:02:46.712037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:03:52.285 [2024-12-09 06:02:46.712226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:03:52.285 [2024-12-09 06:02:46.712325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:03:52.285 [2024-12-09 06:02:46.712321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:03:52.285 [2024-12-09 06:02:46.754449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:03:52.852 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:52.852 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 01:03:52.852 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:03:52.852 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 01:03:52.852 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:03:52.852 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:03:52.852 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:03:52.852 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:52.852 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:03:52.852 [2024-12-09 06:02:47.432274] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:03:53.112 Malloc0 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:03:53.112 06:02:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:03:53.112 [2024-12-09 06:02:47.505511] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:53.112 test case1: single bdev can't be used in multiple subsystems 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:03:53.112 [2024-12-09 06:02:47.541346] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 01:03:53.112 [2024-12-09 06:02:47.541379] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 01:03:53.112 [2024-12-09 06:02:47.541389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:03:53.112 request: 01:03:53.112 { 01:03:53.112 
"nqn": "nqn.2016-06.io.spdk:cnode2", 01:03:53.112 "namespace": { 01:03:53.112 "bdev_name": "Malloc0", 01:03:53.112 "no_auto_visible": false, 01:03:53.112 "hide_metadata": false 01:03:53.112 }, 01:03:53.112 "method": "nvmf_subsystem_add_ns", 01:03:53.112 "req_id": 1 01:03:53.112 } 01:03:53.112 Got JSON-RPC error response 01:03:53.112 response: 01:03:53.112 { 01:03:53.112 "code": -32602, 01:03:53.112 "message": "Invalid parameters" 01:03:53.112 } 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 01:03:53.112 Adding namespace failed - expected result. 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 01:03:53.112 test case2: host connect to nvmf target in multiple paths 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:03:53.112 [2024-12-09 06:02:47.557421] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:53.112 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid=bac40580-41f0-4da4-8cd9-1be4901a67b8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 01:03:53.371 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid=bac40580-41f0-4da4-8cd9-1be4901a67b8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 01:03:53.371 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 01:03:53.371 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 01:03:53.371 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:03:53.371 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 01:03:53.371 06:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 01:03:55.903 06:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:03:55.903 06:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:03:55.903 06:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:03:55.903 06:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 01:03:55.903 06:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
01:03:55.903 06:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 01:03:55.904 06:02:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 01:03:55.904 [global] 01:03:55.904 thread=1 01:03:55.904 invalidate=1 01:03:55.904 rw=write 01:03:55.904 time_based=1 01:03:55.904 runtime=1 01:03:55.904 ioengine=libaio 01:03:55.904 direct=1 01:03:55.904 bs=4096 01:03:55.904 iodepth=1 01:03:55.904 norandommap=0 01:03:55.904 numjobs=1 01:03:55.904 01:03:55.904 verify_dump=1 01:03:55.904 verify_backlog=512 01:03:55.904 verify_state_save=0 01:03:55.904 do_verify=1 01:03:55.904 verify=crc32c-intel 01:03:55.904 [job0] 01:03:55.904 filename=/dev/nvme0n1 01:03:55.904 Could not set queue depth (nvme0n1) 01:03:55.904 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:03:55.904 fio-3.35 01:03:55.904 Starting 1 thread 01:03:56.840 01:03:56.840 job0: (groupid=0, jobs=1): err= 0: pid=65778: Mon Dec 9 06:02:51 2024 01:03:56.840 read: IOPS=3396, BW=13.3MiB/s (13.9MB/s)(13.3MiB/1001msec) 01:03:56.840 slat (nsec): min=7214, max=25017, avg=7758.35, stdev=1415.39 01:03:56.840 clat (usec): min=104, max=5211, avg=171.10, stdev=204.09 01:03:56.840 lat (usec): min=112, max=5218, avg=178.86, stdev=204.76 01:03:56.840 clat percentiles (usec): 01:03:56.840 | 1.00th=[ 110], 5.00th=[ 116], 10.00th=[ 123], 20.00th=[ 131], 01:03:56.840 | 30.00th=[ 141], 40.00th=[ 149], 50.00th=[ 159], 60.00th=[ 169], 01:03:56.840 | 70.00th=[ 178], 80.00th=[ 186], 90.00th=[ 198], 95.00th=[ 210], 01:03:56.840 | 99.00th=[ 233], 99.50th=[ 247], 99.90th=[ 3687], 99.95th=[ 3785], 01:03:56.840 | 99.99th=[ 5211] 01:03:56.840 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 01:03:56.840 slat (usec): min=10, max=104, avg=12.43, stdev= 5.11 01:03:56.840 clat (usec): min=61, max=605, avg=95.23, stdev=21.99 01:03:56.840 lat (usec): min=72, max=620, avg=107.66, stdev=23.28 01:03:56.840 clat percentiles (usec): 01:03:56.840 | 1.00th=[ 66], 5.00th=[ 69], 10.00th=[ 72], 20.00th=[ 77], 01:03:56.840 | 30.00th=[ 81], 40.00th=[ 86], 50.00th=[ 92], 60.00th=[ 99], 01:03:56.840 | 70.00th=[ 106], 80.00th=[ 115], 90.00th=[ 124], 95.00th=[ 131], 01:03:56.840 | 99.00th=[ 147], 99.50th=[ 151], 99.90th=[ 167], 99.95th=[ 172], 01:03:56.840 | 99.99th=[ 603] 01:03:56.840 bw ( KiB/s): min=14584, max=14584, per=100.00%, avg=14584.00, stdev= 0.00, samples=1 01:03:56.840 iops : min= 3646, max= 3646, avg=3646.00, stdev= 0.00, samples=1 01:03:56.840 lat (usec) : 100=31.79%, 250=67.96%, 500=0.06%, 750=0.04% 01:03:56.840 lat (msec) : 4=0.14%, 10=0.01% 01:03:56.840 cpu : usr=1.20%, sys=6.20%, ctx=6984, majf=0, minf=5 01:03:56.840 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:03:56.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:03:56.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:03:56.840 issued rwts: total=3400,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 01:03:56.840 latency : target=0, window=0, percentile=100.00%, depth=1 01:03:56.840 01:03:56.840 Run status group 0 (all jobs): 01:03:56.840 READ: bw=13.3MiB/s (13.9MB/s), 13.3MiB/s-13.3MiB/s (13.9MB/s-13.9MB/s), io=13.3MiB (13.9MB), run=1001-1001msec 01:03:56.840 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 01:03:56.840 01:03:56.840 Disk stats (read/write): 
01:03:56.840 nvme0n1: ios=3122/3124, merge=0/0, ticks=554/313, in_queue=867, util=91.27% 01:03:56.840 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:03:56.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 01:03:56.840 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:03:56.840 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 01:03:56.840 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:03:56.840 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:03:56.840 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:03:56.840 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:03:56.840 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 01:03:56.840 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 01:03:56.840 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 01:03:56.840 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 01:03:56.840 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 01:03:57.099 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:03:57.099 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 01:03:57.099 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 01:03:57.099 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:03:57.099 rmmod nvme_tcp 01:03:57.099 rmmod nvme_fabrics 01:03:57.099 rmmod nvme_keyring 01:03:57.099 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:03:57.099 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 01:03:57.099 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 01:03:57.099 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65692 ']' 01:03:57.099 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65692 01:03:57.099 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 65692 ']' 01:03:57.099 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 65692 01:03:57.099 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 01:03:57.099 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:57.099 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65692 01:03:57.099 killing process with pid 65692 01:03:57.099 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:03:57.099 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:03:57.099 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65692' 01:03:57.099 06:02:51 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 65692 01:03:57.099 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 65692 01:03:57.358 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:03:57.358 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:03:57.358 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:03:57.358 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 01:03:57.358 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 01:03:57.358 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:03:57.358 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 01:03:57.358 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:03:57.358 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:03:57.358 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:03:57.358 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:03:57.358 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:03:57.358 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:03:57.358 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:03:57.358 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:03:57.358 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:03:57.358 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:03:57.358 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:03:57.617 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:03:57.617 06:02:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:03:57.617 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:57.617 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:57.617 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 01:03:57.617 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:57.617 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:57.617 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:57.617 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 01:03:57.617 01:03:57.617 real 0m6.559s 01:03:57.617 user 0m19.575s 01:03:57.617 sys 0m2.336s 01:03:57.617 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 01:03:57.617 ************************************ 01:03:57.617 END TEST 
nvmf_nmic 01:03:57.617 ************************************ 01:03:57.617 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:03:57.617 06:02:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 01:03:57.617 06:02:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:03:57.617 06:02:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:03:57.617 06:02:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:03:57.876 ************************************ 01:03:57.876 START TEST nvmf_fio_target 01:03:57.876 ************************************ 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 01:03:57.876 * Looking for test storage... 01:03:57.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:03:57.876 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 01:03:57.877 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:03:57.877 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:03:57.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:57.877 --rc genhtml_branch_coverage=1 01:03:57.877 --rc genhtml_function_coverage=1 01:03:57.877 --rc genhtml_legend=1 01:03:57.877 --rc geninfo_all_blocks=1 01:03:57.877 --rc geninfo_unexecuted_blocks=1 01:03:57.877 01:03:57.877 ' 01:03:57.877 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:03:57.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:57.877 --rc genhtml_branch_coverage=1 01:03:57.877 --rc genhtml_function_coverage=1 01:03:57.877 --rc genhtml_legend=1 01:03:57.877 --rc geninfo_all_blocks=1 01:03:57.877 --rc geninfo_unexecuted_blocks=1 01:03:57.877 01:03:57.877 ' 01:03:57.877 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:03:57.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:57.877 --rc genhtml_branch_coverage=1 01:03:57.877 --rc genhtml_function_coverage=1 01:03:57.877 --rc genhtml_legend=1 01:03:57.877 --rc geninfo_all_blocks=1 01:03:57.877 --rc geninfo_unexecuted_blocks=1 01:03:57.877 01:03:57.877 ' 01:03:57.877 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:03:57.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:57.877 --rc genhtml_branch_coverage=1 01:03:57.877 --rc genhtml_function_coverage=1 01:03:57.877 --rc genhtml_legend=1 01:03:57.877 --rc geninfo_all_blocks=1 01:03:57.877 --rc geninfo_unexecuted_blocks=1 01:03:57.877 01:03:57.877 ' 01:03:57.877 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:03:57.877 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 01:03:58.136 
06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:03:58.136 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:03:58.136 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:03:58.136 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:03:58.136 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:03:58.136 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:03:58.136 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:03:58.136 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:03:58.136 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:03:58.136 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:03:58.136 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:03:58.136 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:03:58.136 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:03:58.136 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:03:58.136 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:03:58.136 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:03:58.136 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:03:58.137 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:03:58.137 06:02:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:03:58.137 Cannot find device "nvmf_init_br" 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:03:58.137 Cannot find device "nvmf_init_br2" 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:03:58.137 Cannot find device "nvmf_tgt_br" 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:03:58.137 Cannot find device "nvmf_tgt_br2" 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:03:58.137 Cannot find device "nvmf_init_br" 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:03:58.137 Cannot find device "nvmf_init_br2" 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:03:58.137 Cannot find device "nvmf_tgt_br" 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:03:58.137 Cannot find device "nvmf_tgt_br2" 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:03:58.137 Cannot find device "nvmf_br" 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:03:58.137 Cannot find device "nvmf_init_if" 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:03:58.137 Cannot find device "nvmf_init_if2" 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 01:03:58.137 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:58.397 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 01:03:58.397 
06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:58.397 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:03:58.397 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:03:58.656 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:03:58.656 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:03:58.656 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:03:58.656 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:03:58.656 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:03:58.657 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:03:58.657 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 01:03:58.657 01:03:58.657 --- 10.0.0.3 ping statistics --- 01:03:58.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:58.657 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 01:03:58.657 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:03:58.657 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:03:58.657 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.125 ms 01:03:58.657 01:03:58.657 --- 10.0.0.4 ping statistics --- 01:03:58.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:58.657 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 01:03:58.657 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:03:58.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:03:58.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 01:03:58.657 01:03:58.657 --- 10.0.0.1 ping statistics --- 01:03:58.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:58.657 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 01:03:58.657 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:03:58.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:03:58.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 01:03:58.657 01:03:58.657 --- 10.0.0.2 ping statistics --- 01:03:58.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:58.657 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 01:03:58.657 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:03:58.657 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 01:03:58.657 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:03:58.657 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:03:58.657 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:03:58.657 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:03:58.657 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:03:58.657 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:03:58.657 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:03:58.657 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 01:03:58.657 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:03:58.657 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 01:03:58.657 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:03:58.657 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66016 01:03:58.657 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66016 01:03:58.657 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 66016 ']' 01:03:58.657 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:58.657 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:58.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:58.657 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:58.657 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:58.657 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:03:58.657 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:03:58.657 [2024-12-09 06:02:53.132589] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
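For readers tracing the setup, the ip/iptables commands captured above amount to a small virtual topology; the sketch below only condenses what the log already shows, and the two query commands are ordinary iproute2/bridge calls, not part of the SPDK test scripts.

# Topology built by nvmf_veth_init (interface names and addresses as captured above):
#   host side:               nvmf_init_if  10.0.0.1/24    nvmf_init_if2  10.0.0.2/24
#   netns nvmf_tgt_ns_spdk:  nvmf_tgt_if   10.0.0.3/24    nvmf_tgt_if2   10.0.0.4/24
#   The veth peers nvmf_init_br, nvmf_init_br2, nvmf_tgt_br and nvmf_tgt_br2 are all
#   enslaved to the bridge nvmf_br; iptables ACCEPT rules admit TCP port 4420.
# Quick inspection from a shell on the same host:
ip -br addr
ip netns exec nvmf_tgt_ns_spdk ip -br addr
bridge link show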
01:03:58.657 [2024-12-09 06:02:53.132648] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:03:58.916 [2024-12-09 06:02:53.285612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:03:58.916 [2024-12-09 06:02:53.324651] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:03:58.916 [2024-12-09 06:02:53.324698] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:03:58.916 [2024-12-09 06:02:53.324708] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:03:58.916 [2024-12-09 06:02:53.324715] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:03:58.916 [2024-12-09 06:02:53.324722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:03:58.916 [2024-12-09 06:02:53.325720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:03:58.916 [2024-12-09 06:02:53.325816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:03:58.916 [2024-12-09 06:02:53.326825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:03:58.916 [2024-12-09 06:02:53.326827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:03:58.916 [2024-12-09 06:02:53.368432] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:03:59.485 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:59.485 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 01:03:59.485 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:03:59.485 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 01:03:59.485 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:03:59.485 06:02:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:03:59.485 06:02:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:03:59.744 [2024-12-09 06:02:54.242304] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:59.744 06:02:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:04:00.002 06:02:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 01:04:00.003 06:02:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:04:00.261 06:02:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 01:04:00.261 06:02:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:04:00.520 06:02:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 01:04:00.520 06:02:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:04:00.778 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 01:04:00.778 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 01:04:00.778 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:04:01.037 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 01:04:01.037 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:04:01.296 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 01:04:01.296 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:04:01.554 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 01:04:01.555 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 01:04:01.813 06:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:04:01.813 06:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 01:04:01.813 06:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:04:02.072 06:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 01:04:02.072 06:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:04:02.331 06:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:04:02.590 [2024-12-09 06:02:56.938771] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:04:02.590 06:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 01:04:02.590 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 01:04:02.849 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid=bac40580-41f0-4da4-8cd9-1be4901a67b8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 01:04:03.108 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 01:04:03.108 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 01:04:03.108 06:02:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:04:03.108 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 01:04:03.108 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 01:04:03.108 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 01:04:05.012 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:04:05.012 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:04:05.012 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:04:05.013 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 01:04:05.013 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:04:05.013 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 01:04:05.013 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 01:04:05.013 [global] 01:04:05.013 thread=1 01:04:05.013 invalidate=1 01:04:05.013 rw=write 01:04:05.013 time_based=1 01:04:05.013 runtime=1 01:04:05.013 ioengine=libaio 01:04:05.013 direct=1 01:04:05.013 bs=4096 01:04:05.013 iodepth=1 01:04:05.013 norandommap=0 01:04:05.013 numjobs=1 01:04:05.013 01:04:05.013 verify_dump=1 01:04:05.013 verify_backlog=512 01:04:05.013 verify_state_save=0 01:04:05.013 do_verify=1 01:04:05.013 verify=crc32c-intel 01:04:05.013 [job0] 01:04:05.013 filename=/dev/nvme0n1 01:04:05.013 [job1] 01:04:05.013 filename=/dev/nvme0n2 01:04:05.013 [job2] 01:04:05.013 filename=/dev/nvme0n3 01:04:05.013 [job3] 01:04:05.013 filename=/dev/nvme0n4 01:04:05.271 Could not set queue depth (nvme0n1) 01:04:05.271 Could not set queue depth (nvme0n2) 01:04:05.271 Could not set queue depth (nvme0n3) 01:04:05.271 Could not set queue depth (nvme0n4) 01:04:05.271 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:04:05.271 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:04:05.271 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:04:05.271 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:04:05.271 fio-3.35 01:04:05.271 Starting 4 threads 01:04:06.690 01:04:06.690 job0: (groupid=0, jobs=1): err= 0: pid=66195: Mon Dec 9 06:03:00 2024 01:04:06.690 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 01:04:06.690 slat (nsec): min=7115, max=32547, avg=8348.65, stdev=1957.86 01:04:06.690 clat (usec): min=110, max=686, avg=178.42, stdev=26.84 01:04:06.690 lat (usec): min=117, max=694, avg=186.77, stdev=27.29 01:04:06.690 clat percentiles (usec): 01:04:06.690 | 1.00th=[ 135], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 159], 01:04:06.690 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 182], 01:04:06.690 | 70.00th=[ 188], 80.00th=[ 198], 90.00th=[ 210], 95.00th=[ 223], 01:04:06.690 | 99.00th=[ 241], 99.50th=[ 249], 99.90th=[ 289], 99.95th=[ 611], 01:04:06.690 | 99.99th=[ 685] 
01:04:06.690 write: IOPS=3194, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1001msec); 0 zone resets 01:04:06.690 slat (usec): min=10, max=140, avg=13.05, stdev= 5.97 01:04:06.690 clat (usec): min=70, max=258, avg=118.36, stdev=22.12 01:04:06.690 lat (usec): min=81, max=383, avg=131.41, stdev=24.92 01:04:06.690 clat percentiles (usec): 01:04:06.690 | 1.00th=[ 81], 5.00th=[ 91], 10.00th=[ 95], 20.00th=[ 101], 01:04:06.690 | 30.00th=[ 106], 40.00th=[ 111], 50.00th=[ 115], 60.00th=[ 119], 01:04:06.690 | 70.00th=[ 125], 80.00th=[ 133], 90.00th=[ 149], 95.00th=[ 161], 01:04:06.690 | 99.00th=[ 184], 99.50th=[ 202], 99.90th=[ 241], 99.95th=[ 251], 01:04:06.690 | 99.99th=[ 260] 01:04:06.690 bw ( KiB/s): min=12288, max=12288, per=38.90%, avg=12288.00, stdev= 0.00, samples=1 01:04:06.690 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 01:04:06.690 lat (usec) : 100=9.57%, 250=90.19%, 500=0.19%, 750=0.05% 01:04:06.690 cpu : usr=1.60%, sys=5.60%, ctx=6272, majf=0, minf=13 01:04:06.690 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:04:06.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:06.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:06.690 issued rwts: total=3072,3198,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:06.690 latency : target=0, window=0, percentile=100.00%, depth=1 01:04:06.690 job1: (groupid=0, jobs=1): err= 0: pid=66196: Mon Dec 9 06:03:00 2024 01:04:06.690 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 01:04:06.690 slat (nsec): min=6764, max=37809, avg=14006.60, stdev=4642.66 01:04:06.690 clat (usec): min=188, max=684, avg=336.56, stdev=31.83 01:04:06.690 lat (usec): min=205, max=699, avg=350.57, stdev=31.57 01:04:06.690 clat percentiles (usec): 01:04:06.690 | 1.00th=[ 273], 5.00th=[ 285], 10.00th=[ 297], 20.00th=[ 314], 01:04:06.690 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 343], 01:04:06.690 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 371], 95.00th=[ 383], 01:04:06.690 | 99.00th=[ 424], 99.50th=[ 449], 99.90th=[ 537], 99.95th=[ 685], 01:04:06.690 | 99.99th=[ 685] 01:04:06.690 write: IOPS=1569, BW=6278KiB/s (6428kB/s)(6284KiB/1001msec); 0 zone resets 01:04:06.690 slat (usec): min=12, max=105, avg=30.49, stdev=11.18 01:04:06.690 clat (usec): min=106, max=2514, avg=259.77, stdev=66.37 01:04:06.690 lat (usec): min=145, max=2532, avg=290.25, stdev=67.65 01:04:06.690 clat percentiles (usec): 01:04:06.690 | 1.00th=[ 184], 5.00th=[ 206], 10.00th=[ 221], 20.00th=[ 233], 01:04:06.690 | 30.00th=[ 241], 40.00th=[ 249], 50.00th=[ 258], 60.00th=[ 265], 01:04:06.690 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 310], 01:04:06.690 | 99.00th=[ 347], 99.50th=[ 412], 99.90th=[ 506], 99.95th=[ 2507], 01:04:06.690 | 99.99th=[ 2507] 01:04:06.690 bw ( KiB/s): min= 8192, max= 8192, per=25.93%, avg=8192.00, stdev= 0.00, samples=1 01:04:06.691 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 01:04:06.691 lat (usec) : 250=20.73%, 500=79.08%, 750=0.16% 01:04:06.691 lat (msec) : 4=0.03% 01:04:06.691 cpu : usr=2.00%, sys=5.30%, ctx=3108, majf=0, minf=12 01:04:06.691 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:04:06.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:06.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:06.691 issued rwts: total=1536,1571,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:06.691 latency : target=0, window=0, percentile=100.00%, depth=1 
01:04:06.691 job2: (groupid=0, jobs=1): err= 0: pid=66197: Mon Dec 9 06:03:00 2024 01:04:06.691 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 01:04:06.691 slat (nsec): min=14835, max=60272, avg=23996.59, stdev=5003.65 01:04:06.691 clat (usec): min=239, max=1463, avg=330.66, stdev=57.54 01:04:06.691 lat (usec): min=270, max=1490, avg=354.66, stdev=58.29 01:04:06.691 clat percentiles (usec): 01:04:06.691 | 1.00th=[ 269], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 302], 01:04:06.691 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 322], 60.00th=[ 330], 01:04:06.691 | 70.00th=[ 338], 80.00th=[ 351], 90.00th=[ 363], 95.00th=[ 375], 01:04:06.691 | 99.00th=[ 611], 99.50th=[ 652], 99.90th=[ 709], 99.95th=[ 1467], 01:04:06.691 | 99.99th=[ 1467] 01:04:06.691 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 01:04:06.691 slat (usec): min=20, max=100, avg=40.68, stdev= 7.41 01:04:06.691 clat (usec): min=111, max=4985, avg=250.76, stdev=126.80 01:04:06.691 lat (usec): min=143, max=5042, avg=291.44, stdev=127.70 01:04:06.691 clat percentiles (usec): 01:04:06.691 | 1.00th=[ 143], 5.00th=[ 196], 10.00th=[ 208], 20.00th=[ 223], 01:04:06.691 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 255], 01:04:06.691 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 302], 01:04:06.691 | 99.00th=[ 326], 99.50th=[ 347], 99.90th=[ 963], 99.95th=[ 5014], 01:04:06.691 | 99.99th=[ 5014] 01:04:06.691 bw ( KiB/s): min= 8192, max= 8192, per=25.93%, avg=8192.00, stdev= 0.00, samples=1 01:04:06.691 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 01:04:06.691 lat (usec) : 250=26.53%, 500=72.33%, 750=1.04%, 1000=0.03% 01:04:06.691 lat (msec) : 2=0.03%, 10=0.03% 01:04:06.691 cpu : usr=2.50%, sys=7.60%, ctx=3072, majf=0, minf=15 01:04:06.691 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:04:06.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:06.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:06.691 issued rwts: total=1536,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:06.691 latency : target=0, window=0, percentile=100.00%, depth=1 01:04:06.691 job3: (groupid=0, jobs=1): err= 0: pid=66198: Mon Dec 9 06:03:00 2024 01:04:06.691 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 01:04:06.691 slat (nsec): min=9608, max=76874, avg=18009.16, stdev=7159.25 01:04:06.691 clat (usec): min=233, max=503, avg=330.43, stdev=33.42 01:04:06.691 lat (usec): min=254, max=517, avg=348.44, stdev=32.44 01:04:06.691 clat percentiles (usec): 01:04:06.691 | 1.00th=[ 260], 5.00th=[ 277], 10.00th=[ 289], 20.00th=[ 306], 01:04:06.691 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 330], 60.00th=[ 338], 01:04:06.691 | 70.00th=[ 343], 80.00th=[ 355], 90.00th=[ 367], 95.00th=[ 375], 01:04:06.691 | 99.00th=[ 465], 99.50th=[ 478], 99.90th=[ 494], 99.95th=[ 502], 01:04:06.691 | 99.99th=[ 502] 01:04:06.691 write: IOPS=1599, BW=6398KiB/s (6551kB/s)(6404KiB/1001msec); 0 zone resets 01:04:06.691 slat (usec): min=8, max=103, avg=31.12, stdev=11.85 01:04:06.691 clat (usec): min=131, max=2426, avg=255.25, stdev=75.57 01:04:06.691 lat (usec): min=148, max=2441, avg=286.37, stdev=77.01 01:04:06.691 clat percentiles (usec): 01:04:06.691 | 1.00th=[ 163], 5.00th=[ 194], 10.00th=[ 210], 20.00th=[ 227], 01:04:06.691 | 30.00th=[ 237], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 262], 01:04:06.691 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 302], 01:04:06.691 | 99.00th=[ 326], 99.50th=[ 379], 
99.90th=[ 1516], 99.95th=[ 2442], 01:04:06.691 | 99.99th=[ 2442] 01:04:06.691 bw ( KiB/s): min= 8192, max= 8192, per=25.93%, avg=8192.00, stdev= 0.00, samples=1 01:04:06.691 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 01:04:06.691 lat (usec) : 250=23.24%, 500=76.57%, 750=0.10% 01:04:06.691 lat (msec) : 2=0.06%, 4=0.03% 01:04:06.691 cpu : usr=1.50%, sys=6.90%, ctx=3137, majf=0, minf=9 01:04:06.691 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:04:06.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:06.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:06.691 issued rwts: total=1536,1601,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:06.691 latency : target=0, window=0, percentile=100.00%, depth=1 01:04:06.691 01:04:06.691 Run status group 0 (all jobs): 01:04:06.691 READ: bw=30.0MiB/s (31.4MB/s), 6138KiB/s-12.0MiB/s (6285kB/s-12.6MB/s), io=30.0MiB (31.5MB), run=1001-1001msec 01:04:06.691 WRITE: bw=30.9MiB/s (32.3MB/s), 6138KiB/s-12.5MiB/s (6285kB/s-13.1MB/s), io=30.9MiB (32.4MB), run=1001-1001msec 01:04:06.691 01:04:06.691 Disk stats (read/write): 01:04:06.691 nvme0n1: ios=2610/2824, merge=0/0, ticks=493/355, in_queue=848, util=88.49% 01:04:06.691 nvme0n2: ios=1232/1536, merge=0/0, ticks=409/395, in_queue=804, util=88.98% 01:04:06.691 nvme0n3: ios=1182/1536, merge=0/0, ticks=410/414, in_queue=824, util=89.58% 01:04:06.691 nvme0n4: ios=1198/1536, merge=0/0, ticks=396/406, in_queue=802, util=89.83% 01:04:06.691 06:03:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 01:04:06.691 [global] 01:04:06.691 thread=1 01:04:06.691 invalidate=1 01:04:06.691 rw=randwrite 01:04:06.691 time_based=1 01:04:06.691 runtime=1 01:04:06.691 ioengine=libaio 01:04:06.691 direct=1 01:04:06.691 bs=4096 01:04:06.691 iodepth=1 01:04:06.691 norandommap=0 01:04:06.691 numjobs=1 01:04:06.691 01:04:06.691 verify_dump=1 01:04:06.691 verify_backlog=512 01:04:06.691 verify_state_save=0 01:04:06.691 do_verify=1 01:04:06.691 verify=crc32c-intel 01:04:06.691 [job0] 01:04:06.691 filename=/dev/nvme0n1 01:04:06.691 [job1] 01:04:06.691 filename=/dev/nvme0n2 01:04:06.691 [job2] 01:04:06.691 filename=/dev/nvme0n3 01:04:06.691 [job3] 01:04:06.691 filename=/dev/nvme0n4 01:04:06.691 Could not set queue depth (nvme0n1) 01:04:06.691 Could not set queue depth (nvme0n2) 01:04:06.691 Could not set queue depth (nvme0n3) 01:04:06.691 Could not set queue depth (nvme0n4) 01:04:06.691 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:04:06.691 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:04:06.691 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:04:06.691 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:04:06.691 fio-3.35 01:04:06.691 Starting 4 threads 01:04:08.085 01:04:08.085 job0: (groupid=0, jobs=1): err= 0: pid=66251: Mon Dec 9 06:03:02 2024 01:04:08.085 read: IOPS=1800, BW=7201KiB/s (7374kB/s)(7208KiB/1001msec) 01:04:08.085 slat (nsec): min=9211, max=40116, avg=13558.78, stdev=3759.73 01:04:08.085 clat (usec): min=179, max=1579, avg=297.60, stdev=57.61 01:04:08.085 lat (usec): min=195, max=1594, avg=311.16, stdev=57.72 01:04:08.085 clat percentiles (usec): 01:04:08.085 | 
1.00th=[ 206], 5.00th=[ 223], 10.00th=[ 237], 20.00th=[ 255], 01:04:08.085 | 30.00th=[ 269], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 310], 01:04:08.085 | 70.00th=[ 322], 80.00th=[ 334], 90.00th=[ 355], 95.00th=[ 371], 01:04:08.085 | 99.00th=[ 433], 99.50th=[ 478], 99.90th=[ 709], 99.95th=[ 1582], 01:04:08.085 | 99.99th=[ 1582] 01:04:08.085 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 01:04:08.085 slat (nsec): min=13265, max=78634, avg=23395.12, stdev=8231.93 01:04:08.085 clat (usec): min=99, max=332, avg=188.48, stdev=46.44 01:04:08.085 lat (usec): min=119, max=364, avg=211.88, stdev=49.03 01:04:08.086 clat percentiles (usec): 01:04:08.086 | 1.00th=[ 114], 5.00th=[ 125], 10.00th=[ 135], 20.00th=[ 145], 01:04:08.086 | 30.00th=[ 157], 40.00th=[ 167], 50.00th=[ 180], 60.00th=[ 196], 01:04:08.086 | 70.00th=[ 215], 80.00th=[ 237], 90.00th=[ 255], 95.00th=[ 269], 01:04:08.086 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 322], 99.95th=[ 326], 01:04:08.086 | 99.99th=[ 334] 01:04:08.086 bw ( KiB/s): min= 8192, max= 8192, per=25.93%, avg=8192.00, stdev= 0.00, samples=1 01:04:08.086 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 01:04:08.086 lat (usec) : 100=0.03%, 250=54.31%, 500=45.51%, 750=0.13% 01:04:08.086 lat (msec) : 2=0.03% 01:04:08.086 cpu : usr=1.20%, sys=5.80%, ctx=3850, majf=0, minf=13 01:04:08.086 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:04:08.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:08.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:08.086 issued rwts: total=1802,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:08.086 latency : target=0, window=0, percentile=100.00%, depth=1 01:04:08.086 job1: (groupid=0, jobs=1): err= 0: pid=66252: Mon Dec 9 06:03:02 2024 01:04:08.086 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 01:04:08.086 slat (usec): min=22, max=431, avg=29.35, stdev=17.98 01:04:08.086 clat (usec): min=208, max=707, avg=458.73, stdev=98.40 01:04:08.086 lat (usec): min=234, max=775, avg=488.07, stdev=99.00 01:04:08.086 clat percentiles (usec): 01:04:08.086 | 1.00th=[ 265], 5.00th=[ 310], 10.00th=[ 326], 20.00th=[ 359], 01:04:08.086 | 30.00th=[ 379], 40.00th=[ 433], 50.00th=[ 478], 60.00th=[ 502], 01:04:08.086 | 70.00th=[ 529], 80.00th=[ 545], 90.00th=[ 578], 95.00th=[ 611], 01:04:08.086 | 99.00th=[ 660], 99.50th=[ 685], 99.90th=[ 693], 99.95th=[ 709], 01:04:08.086 | 99.99th=[ 709] 01:04:08.086 write: IOPS=1463, BW=5854KiB/s (5995kB/s)(5860KiB/1001msec); 0 zone resets 01:04:08.086 slat (usec): min=28, max=239, avg=43.74, stdev= 8.55 01:04:08.086 clat (usec): min=122, max=2134, avg=292.55, stdev=92.57 01:04:08.086 lat (usec): min=161, max=2180, avg=336.28, stdev=93.11 01:04:08.086 clat percentiles (usec): 01:04:08.086 | 1.00th=[ 151], 5.00th=[ 202], 10.00th=[ 221], 20.00th=[ 237], 01:04:08.086 | 30.00th=[ 249], 40.00th=[ 265], 50.00th=[ 277], 60.00th=[ 293], 01:04:08.086 | 70.00th=[ 314], 80.00th=[ 347], 90.00th=[ 392], 95.00th=[ 420], 01:04:08.086 | 99.00th=[ 469], 99.50th=[ 482], 99.90th=[ 1827], 99.95th=[ 2147], 01:04:08.086 | 99.99th=[ 2147] 01:04:08.086 bw ( KiB/s): min= 5904, max= 5904, per=18.69%, avg=5904.00, stdev= 0.00, samples=1 01:04:08.086 iops : min= 1476, max= 1476, avg=1476.00, stdev= 0.00, samples=1 01:04:08.086 lat (usec) : 250=18.36%, 500=64.97%, 750=16.59% 01:04:08.086 lat (msec) : 2=0.04%, 4=0.04% 01:04:08.086 cpu : usr=1.40%, sys=8.10%, ctx=2501, majf=0, minf=17 01:04:08.086 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:04:08.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:08.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:08.086 issued rwts: total=1024,1465,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:08.086 latency : target=0, window=0, percentile=100.00%, depth=1 01:04:08.086 job2: (groupid=0, jobs=1): err= 0: pid=66253: Mon Dec 9 06:03:02 2024 01:04:08.086 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 01:04:08.086 slat (nsec): min=15860, max=63730, avg=23920.71, stdev=4681.82 01:04:08.086 clat (usec): min=209, max=391, avg=301.70, stdev=29.57 01:04:08.086 lat (usec): min=234, max=414, avg=325.62, stdev=30.09 01:04:08.086 clat percentiles (usec): 01:04:08.086 | 1.00th=[ 239], 5.00th=[ 255], 10.00th=[ 265], 20.00th=[ 277], 01:04:08.086 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 310], 01:04:08.086 | 70.00th=[ 318], 80.00th=[ 330], 90.00th=[ 343], 95.00th=[ 351], 01:04:08.086 | 99.00th=[ 371], 99.50th=[ 375], 99.90th=[ 388], 99.95th=[ 392], 01:04:08.086 | 99.99th=[ 392] 01:04:08.086 write: IOPS=1794, BW=7177KiB/s (7349kB/s)(7184KiB/1001msec); 0 zone resets 01:04:08.086 slat (nsec): min=20165, max=96442, avg=36317.40, stdev=7793.43 01:04:08.086 clat (usec): min=120, max=1374, avg=237.22, stdev=49.77 01:04:08.086 lat (usec): min=144, max=1411, avg=273.54, stdev=51.82 01:04:08.086 clat percentiles (usec): 01:04:08.086 | 1.00th=[ 174], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 208], 01:04:08.086 | 30.00th=[ 219], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 243], 01:04:08.086 | 70.00th=[ 251], 80.00th=[ 260], 90.00th=[ 277], 95.00th=[ 289], 01:04:08.086 | 99.00th=[ 355], 99.50th=[ 396], 99.90th=[ 1237], 99.95th=[ 1369], 01:04:08.086 | 99.99th=[ 1369] 01:04:08.086 bw ( KiB/s): min= 8192, max= 8192, per=25.93%, avg=8192.00, stdev= 0.00, samples=1 01:04:08.086 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 01:04:08.086 lat (usec) : 250=39.11%, 500=60.80%, 750=0.03% 01:04:08.086 lat (msec) : 2=0.06% 01:04:08.086 cpu : usr=2.30%, sys=8.40%, ctx=3333, majf=0, minf=5 01:04:08.086 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:04:08.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:08.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:08.086 issued rwts: total=1536,1796,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:08.086 latency : target=0, window=0, percentile=100.00%, depth=1 01:04:08.086 job3: (groupid=0, jobs=1): err= 0: pid=66254: Mon Dec 9 06:03:02 2024 01:04:08.086 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 01:04:08.086 slat (nsec): min=7206, max=24233, avg=7902.89, stdev=1519.51 01:04:08.086 clat (usec): min=143, max=2130, avg=214.37, stdev=46.30 01:04:08.086 lat (usec): min=150, max=2138, avg=222.28, stdev=46.33 01:04:08.086 clat percentiles (usec): 01:04:08.086 | 1.00th=[ 167], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 192], 01:04:08.086 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 219], 01:04:08.086 | 70.00th=[ 225], 80.00th=[ 233], 90.00th=[ 245], 95.00th=[ 255], 01:04:08.086 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 545], 99.95th=[ 725], 01:04:08.086 | 99.99th=[ 2147] 01:04:08.086 write: IOPS=2595, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1001msec); 0 zone resets 01:04:08.086 slat (usec): min=10, max=104, avg=12.70, stdev= 5.51 01:04:08.086 clat (usec): min=95, max=2891, avg=151.44, stdev=62.51 01:04:08.086 lat (usec): 
min=107, max=2916, avg=164.14, stdev=63.30 01:04:08.086 clat percentiles (usec): 01:04:08.086 | 1.00th=[ 112], 5.00th=[ 119], 10.00th=[ 124], 20.00th=[ 130], 01:04:08.086 | 30.00th=[ 137], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 153], 01:04:08.086 | 70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 180], 95.00th=[ 190], 01:04:08.086 | 99.00th=[ 221], 99.50th=[ 239], 99.90th=[ 857], 99.95th=[ 906], 01:04:08.086 | 99.99th=[ 2900] 01:04:08.086 bw ( KiB/s): min=12288, max=12288, per=38.89%, avg=12288.00, stdev= 0.00, samples=1 01:04:08.086 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 01:04:08.086 lat (usec) : 100=0.06%, 250=96.30%, 500=3.51%, 750=0.06%, 1000=0.04% 01:04:08.086 lat (msec) : 4=0.04% 01:04:08.086 cpu : usr=1.10%, sys=4.60%, ctx=5158, majf=0, minf=13 01:04:08.086 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:04:08.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:08.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:08.086 issued rwts: total=2560,2598,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:08.086 latency : target=0, window=0, percentile=100.00%, depth=1 01:04:08.086 01:04:08.086 Run status group 0 (all jobs): 01:04:08.086 READ: bw=27.0MiB/s (28.3MB/s), 4092KiB/s-9.99MiB/s (4190kB/s-10.5MB/s), io=27.0MiB (28.4MB), run=1001-1001msec 01:04:08.086 WRITE: bw=30.9MiB/s (32.4MB/s), 5854KiB/s-10.1MiB/s (5995kB/s-10.6MB/s), io=30.9MiB (32.4MB), run=1001-1001msec 01:04:08.086 01:04:08.086 Disk stats (read/write): 01:04:08.086 nvme0n1: ios=1585/1817, merge=0/0, ticks=496/360, in_queue=856, util=88.97% 01:04:08.086 nvme0n2: ios=1065/1078, merge=0/0, ticks=503/339, in_queue=842, util=89.48% 01:04:08.086 nvme0n3: ios=1349/1536, merge=0/0, ticks=403/384, in_queue=787, util=89.43% 01:04:08.086 nvme0n4: ios=2048/2492, merge=0/0, ticks=437/392, in_queue=829, util=89.89% 01:04:08.086 06:03:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 01:04:08.086 [global] 01:04:08.086 thread=1 01:04:08.086 invalidate=1 01:04:08.086 rw=write 01:04:08.086 time_based=1 01:04:08.086 runtime=1 01:04:08.086 ioengine=libaio 01:04:08.086 direct=1 01:04:08.086 bs=4096 01:04:08.086 iodepth=128 01:04:08.086 norandommap=0 01:04:08.086 numjobs=1 01:04:08.086 01:04:08.086 verify_dump=1 01:04:08.086 verify_backlog=512 01:04:08.086 verify_state_save=0 01:04:08.086 do_verify=1 01:04:08.086 verify=crc32c-intel 01:04:08.086 [job0] 01:04:08.086 filename=/dev/nvme0n1 01:04:08.086 [job1] 01:04:08.086 filename=/dev/nvme0n2 01:04:08.086 [job2] 01:04:08.086 filename=/dev/nvme0n3 01:04:08.086 [job3] 01:04:08.086 filename=/dev/nvme0n4 01:04:08.086 Could not set queue depth (nvme0n1) 01:04:08.086 Could not set queue depth (nvme0n2) 01:04:08.086 Could not set queue depth (nvme0n3) 01:04:08.086 Could not set queue depth (nvme0n4) 01:04:08.086 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:04:08.086 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:04:08.086 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:04:08.086 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:04:08.086 fio-3.35 01:04:08.086 Starting 4 threads 01:04:09.461 01:04:09.461 job0: (groupid=0, jobs=1): err= 0: pid=66312: Mon 
Dec 9 06:03:03 2024 01:04:09.461 read: IOPS=3514, BW=13.7MiB/s (14.4MB/s)(13.8MiB/1004msec) 01:04:09.461 slat (usec): min=9, max=7535, avg=139.33, stdev=603.12 01:04:09.461 clat (usec): min=1479, max=25268, avg=18180.96, stdev=2157.18 01:04:09.461 lat (usec): min=3615, max=25358, avg=18320.30, stdev=2148.89 01:04:09.461 clat percentiles (usec): 01:04:09.461 | 1.00th=[ 8356], 5.00th=[15533], 10.00th=[16319], 20.00th=[17171], 01:04:09.461 | 30.00th=[17695], 40.00th=[17957], 50.00th=[18220], 60.00th=[18744], 01:04:09.461 | 70.00th=[19006], 80.00th=[19530], 90.00th=[20055], 95.00th=[21103], 01:04:09.461 | 99.00th=[23200], 99.50th=[23725], 99.90th=[24773], 99.95th=[25297], 01:04:09.461 | 99.99th=[25297] 01:04:09.461 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 01:04:09.461 slat (usec): min=23, max=8591, avg=129.76, stdev=742.97 01:04:09.461 clat (usec): min=10565, max=25596, avg=17482.63, stdev=1917.75 01:04:09.461 lat (usec): min=10621, max=25638, avg=17612.39, stdev=2038.95 01:04:09.461 clat percentiles (usec): 01:04:09.461 | 1.00th=[12256], 5.00th=[15008], 10.00th=[15270], 20.00th=[15926], 01:04:09.461 | 30.00th=[16581], 40.00th=[17171], 50.00th=[17433], 60.00th=[17957], 01:04:09.461 | 70.00th=[18220], 80.00th=[18744], 90.00th=[19268], 95.00th=[21103], 01:04:09.461 | 99.00th=[23987], 99.50th=[24249], 99.90th=[25560], 99.95th=[25560], 01:04:09.461 | 99.99th=[25560] 01:04:09.461 bw ( KiB/s): min=13048, max=15624, per=39.58%, avg=14336.00, stdev=1821.51, samples=2 01:04:09.461 iops : min= 3262, max= 3906, avg=3584.00, stdev=455.38, samples=2 01:04:09.461 lat (msec) : 2=0.01%, 4=0.20%, 10=0.49%, 20=90.36%, 50=8.94% 01:04:09.461 cpu : usr=4.89%, sys=14.26%, ctx=225, majf=0, minf=1 01:04:09.461 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 01:04:09.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:09.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:04:09.461 issued rwts: total=3529,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:09.461 latency : target=0, window=0, percentile=100.00%, depth=128 01:04:09.461 job1: (groupid=0, jobs=1): err= 0: pid=66313: Mon Dec 9 06:03:03 2024 01:04:09.461 read: IOPS=1529, BW=6120KiB/s (6266kB/s)(6144KiB/1004msec) 01:04:09.461 slat (usec): min=10, max=17818, avg=370.60, stdev=2005.67 01:04:09.461 clat (usec): min=25016, max=66620, avg=47690.35, stdev=8037.04 01:04:09.461 lat (usec): min=32773, max=66639, avg=48060.95, stdev=7852.23 01:04:09.461 clat percentiles (usec): 01:04:09.461 | 1.00th=[32900], 5.00th=[36963], 10.00th=[40109], 20.00th=[42730], 01:04:09.461 | 30.00th=[43254], 40.00th=[43779], 50.00th=[44303], 60.00th=[45876], 01:04:09.461 | 70.00th=[52167], 80.00th=[56886], 90.00th=[60556], 95.00th=[63177], 01:04:09.461 | 99.00th=[66847], 99.50th=[66847], 99.90th=[66847], 99.95th=[66847], 01:04:09.461 | 99.99th=[66847] 01:04:09.461 write: IOPS=1626, BW=6506KiB/s (6662kB/s)(6532KiB/1004msec); 0 zone resets 01:04:09.461 slat (usec): min=23, max=11241, avg=252.19, stdev=1246.08 01:04:09.461 clat (usec): min=2432, max=51095, avg=31920.64, stdev=5945.82 01:04:09.461 lat (usec): min=9694, max=51149, avg=32172.83, stdev=5814.04 01:04:09.461 clat percentiles (usec): 01:04:09.461 | 1.00th=[10290], 5.00th=[24249], 10.00th=[28181], 20.00th=[29230], 01:04:09.461 | 30.00th=[29492], 40.00th=[29754], 50.00th=[30278], 60.00th=[32637], 01:04:09.461 | 70.00th=[34341], 80.00th=[36439], 90.00th=[38011], 95.00th=[39584], 01:04:09.461 | 99.00th=[51119], 
99.50th=[51119], 99.90th=[51119], 99.95th=[51119], 01:04:09.461 | 99.99th=[51119] 01:04:09.461 bw ( KiB/s): min= 4096, max= 8208, per=16.99%, avg=6152.00, stdev=2907.62, samples=2 01:04:09.461 iops : min= 1024, max= 2052, avg=1538.00, stdev=726.91, samples=2 01:04:09.461 lat (msec) : 4=0.03%, 10=0.25%, 20=1.77%, 50=82.23%, 100=15.71% 01:04:09.461 cpu : usr=2.49%, sys=7.18%, ctx=101, majf=0, minf=5 01:04:09.461 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 01:04:09.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:09.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:04:09.461 issued rwts: total=1536,1633,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:09.461 latency : target=0, window=0, percentile=100.00%, depth=128 01:04:09.461 job2: (groupid=0, jobs=1): err= 0: pid=66314: Mon Dec 9 06:03:03 2024 01:04:09.461 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 01:04:09.461 slat (usec): min=9, max=15391, avg=268.99, stdev=1544.17 01:04:09.461 clat (usec): min=16299, max=60567, avg=35055.98, stdev=11913.23 01:04:09.461 lat (usec): min=23320, max=60592, avg=35324.96, stdev=11913.81 01:04:09.461 clat percentiles (usec): 01:04:09.461 | 1.00th=[22414], 5.00th=[23725], 10.00th=[24511], 20.00th=[25035], 01:04:09.461 | 30.00th=[26346], 40.00th=[30016], 50.00th=[30540], 60.00th=[31327], 01:04:09.461 | 70.00th=[35390], 80.00th=[47973], 90.00th=[58459], 95.00th=[58983], 01:04:09.461 | 99.00th=[60556], 99.50th=[60556], 99.90th=[60556], 99.95th=[60556], 01:04:09.461 | 99.99th=[60556] 01:04:09.461 write: IOPS=2200, BW=8801KiB/s (9012kB/s)(8836KiB/1004msec); 0 zone resets 01:04:09.461 slat (usec): min=21, max=13603, avg=192.06, stdev=1013.49 01:04:09.461 clat (usec): min=2392, max=43402, avg=24656.28, stdev=7148.70 01:04:09.461 lat (usec): min=9841, max=43435, avg=24848.33, stdev=7124.83 01:04:09.461 clat percentiles (usec): 01:04:09.461 | 1.00th=[10683], 5.00th=[16909], 10.00th=[17957], 20.00th=[18744], 01:04:09.461 | 30.00th=[19268], 40.00th=[19530], 50.00th=[23200], 60.00th=[24249], 01:04:09.461 | 70.00th=[27132], 80.00th=[33424], 90.00th=[35390], 95.00th=[36963], 01:04:09.461 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 01:04:09.461 | 99.99th=[43254] 01:04:09.461 bw ( KiB/s): min= 8208, max= 8456, per=23.00%, avg=8332.00, stdev=175.36, samples=2 01:04:09.461 iops : min= 2052, max= 2114, avg=2083.00, stdev=43.84, samples=2 01:04:09.461 lat (msec) : 4=0.02%, 10=0.12%, 20=22.15%, 50=69.70%, 100=8.01% 01:04:09.461 cpu : usr=2.49%, sys=9.07%, ctx=164, majf=0, minf=1 01:04:09.461 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 01:04:09.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:09.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:04:09.461 issued rwts: total=2048,2209,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:09.461 latency : target=0, window=0, percentile=100.00%, depth=128 01:04:09.461 job3: (groupid=0, jobs=1): err= 0: pid=66315: Mon Dec 9 06:03:03 2024 01:04:09.461 read: IOPS=1532, BW=6132KiB/s (6279kB/s)(6144KiB/1002msec) 01:04:09.461 slat (usec): min=10, max=17832, avg=370.26, stdev=1998.93 01:04:09.461 clat (usec): min=25365, max=66608, avg=47748.16, stdev=8051.66 01:04:09.461 lat (usec): min=33213, max=66628, avg=48118.42, stdev=7869.44 01:04:09.461 clat percentiles (usec): 01:04:09.461 | 1.00th=[33162], 5.00th=[37487], 10.00th=[37487], 20.00th=[42730], 01:04:09.461 | 30.00th=[43254], 
40.00th=[43779], 50.00th=[44303], 60.00th=[46400], 01:04:09.461 | 70.00th=[52167], 80.00th=[56886], 90.00th=[60556], 95.00th=[63701], 01:04:09.461 | 99.00th=[66323], 99.50th=[66847], 99.90th=[66847], 99.95th=[66847], 01:04:09.461 | 99.99th=[66847] 01:04:09.461 write: IOPS=1661, BW=6647KiB/s (6806kB/s)(6660KiB/1002msec); 0 zone resets 01:04:09.461 slat (usec): min=23, max=11673, avg=248.26, stdev=1237.82 01:04:09.461 clat (usec): min=143, max=50707, avg=31172.47, stdev=7186.22 01:04:09.461 lat (usec): min=1420, max=50743, avg=31420.73, stdev=7094.55 01:04:09.461 clat percentiles (usec): 01:04:09.461 | 1.00th=[ 2073], 5.00th=[18220], 10.00th=[27657], 20.00th=[29230], 01:04:09.461 | 30.00th=[29492], 40.00th=[29492], 50.00th=[30016], 60.00th=[32113], 01:04:09.461 | 70.00th=[33817], 80.00th=[36439], 90.00th=[37487], 95.00th=[38536], 01:04:09.461 | 99.00th=[50594], 99.50th=[50594], 99.90th=[50594], 99.95th=[50594], 01:04:09.461 | 99.99th=[50594] 01:04:09.461 bw ( KiB/s): min= 8192, max= 8192, per=22.62%, avg=8192.00, stdev= 0.00, samples=1 01:04:09.461 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 01:04:09.461 lat (usec) : 250=0.03% 01:04:09.461 lat (msec) : 2=0.44%, 4=0.56%, 10=0.94%, 20=1.06%, 50=81.41% 01:04:09.462 lat (msec) : 100=15.56% 01:04:09.462 cpu : usr=2.30%, sys=5.99%, ctx=107, majf=0, minf=6 01:04:09.462 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 01:04:09.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:09.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:04:09.462 issued rwts: total=1536,1665,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:09.462 latency : target=0, window=0, percentile=100.00%, depth=128 01:04:09.462 01:04:09.462 Run status group 0 (all jobs): 01:04:09.462 READ: bw=33.7MiB/s (35.3MB/s), 6120KiB/s-13.7MiB/s (6266kB/s-14.4MB/s), io=33.8MiB (35.4MB), run=1002-1004msec 01:04:09.462 WRITE: bw=35.4MiB/s (37.1MB/s), 6506KiB/s-13.9MiB/s (6662kB/s-14.6MB/s), io=35.5MiB (37.2MB), run=1002-1004msec 01:04:09.462 01:04:09.462 Disk stats (read/write): 01:04:09.462 nvme0n1: ios=3093/3072, merge=0/0, ticks=27271/21801, in_queue=49072, util=89.27% 01:04:09.462 nvme0n2: ios=1265/1536, merge=0/0, ticks=14663/11017, in_queue=25680, util=89.90% 01:04:09.462 nvme0n3: ios=1617/2048, merge=0/0, ticks=14352/10929, in_queue=25281, util=89.73% 01:04:09.462 nvme0n4: ios=1216/1536, merge=0/0, ticks=13625/9115, in_queue=22740, util=87.51% 01:04:09.462 06:03:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 01:04:09.462 [global] 01:04:09.462 thread=1 01:04:09.462 invalidate=1 01:04:09.462 rw=randwrite 01:04:09.462 time_based=1 01:04:09.462 runtime=1 01:04:09.462 ioengine=libaio 01:04:09.462 direct=1 01:04:09.462 bs=4096 01:04:09.462 iodepth=128 01:04:09.462 norandommap=0 01:04:09.462 numjobs=1 01:04:09.462 01:04:09.462 verify_dump=1 01:04:09.462 verify_backlog=512 01:04:09.462 verify_state_save=0 01:04:09.462 do_verify=1 01:04:09.462 verify=crc32c-intel 01:04:09.462 [job0] 01:04:09.462 filename=/dev/nvme0n1 01:04:09.462 [job1] 01:04:09.462 filename=/dev/nvme0n2 01:04:09.462 [job2] 01:04:09.462 filename=/dev/nvme0n3 01:04:09.462 [job3] 01:04:09.462 filename=/dev/nvme0n4 01:04:09.462 Could not set queue depth (nvme0n1) 01:04:09.462 Could not set queue depth (nvme0n2) 01:04:09.462 Could not set queue depth (nvme0n3) 01:04:09.462 Could not set queue depth (nvme0n4) 01:04:09.720 job0: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:04:09.720 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:04:09.720 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:04:09.720 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:04:09.720 fio-3.35 01:04:09.720 Starting 4 threads 01:04:11.098 01:04:11.098 job0: (groupid=0, jobs=1): err= 0: pid=66376: Mon Dec 9 06:03:05 2024 01:04:11.098 read: IOPS=1015, BW=4063KiB/s (4161kB/s)(4096KiB/1008msec) 01:04:11.098 slat (usec): min=9, max=22734, avg=375.57, stdev=1711.84 01:04:11.098 clat (msec): min=29, max=102, avg=47.17, stdev=16.30 01:04:11.098 lat (msec): min=31, max=102, avg=47.54, stdev=16.47 01:04:11.098 clat percentiles (msec): 01:04:11.098 | 1.00th=[ 34], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 37], 01:04:11.098 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 40], 60.00th=[ 41], 01:04:11.098 | 70.00th=[ 49], 80.00th=[ 55], 90.00th=[ 77], 95.00th=[ 89], 01:04:11.098 | 99.00th=[ 93], 99.50th=[ 95], 99.90th=[ 102], 99.95th=[ 103], 01:04:11.098 | 99.99th=[ 103] 01:04:11.098 write: IOPS=1160, BW=4643KiB/s (4754kB/s)(4680KiB/1008msec); 0 zone resets 01:04:11.098 slat (usec): min=9, max=14912, avg=519.17, stdev=2061.89 01:04:11.098 clat (usec): min=1948, max=122764, avg=67213.64, stdev=28031.22 01:04:11.098 lat (msec): min=9, max=122, avg=67.73, stdev=28.14 01:04:11.098 clat percentiles (msec): 01:04:11.098 | 1.00th=[ 22], 5.00th=[ 29], 10.00th=[ 36], 20.00th=[ 42], 01:04:11.098 | 30.00th=[ 47], 40.00th=[ 49], 50.00th=[ 69], 60.00th=[ 80], 01:04:11.098 | 70.00th=[ 87], 80.00th=[ 94], 90.00th=[ 104], 95.00th=[ 117], 01:04:11.098 | 99.00th=[ 123], 99.50th=[ 124], 99.90th=[ 124], 99.95th=[ 124], 01:04:11.098 | 99.99th=[ 124] 01:04:11.098 bw ( KiB/s): min= 3072, max= 5264, per=11.15%, avg=4168.00, stdev=1549.98, samples=2 01:04:11.098 iops : min= 768, max= 1316, avg=1042.00, stdev=387.49, samples=2 01:04:11.098 lat (msec) : 2=0.05%, 10=0.36%, 50=58.61%, 100=34.00%, 250=6.97% 01:04:11.098 cpu : usr=1.59%, sys=4.47%, ctx=138, majf=0, minf=13 01:04:11.098 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 01:04:11.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:11.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:04:11.098 issued rwts: total=1024,1170,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:11.098 latency : target=0, window=0, percentile=100.00%, depth=128 01:04:11.098 job1: (groupid=0, jobs=1): err= 0: pid=66377: Mon Dec 9 06:03:05 2024 01:04:11.098 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 01:04:11.098 slat (usec): min=10, max=5913, avg=180.25, stdev=889.18 01:04:11.098 clat (usec): min=17914, max=25732, avg=24019.57, stdev=1028.90 01:04:11.098 lat (usec): min=22561, max=25764, avg=24199.82, stdev=515.53 01:04:11.098 clat percentiles (usec): 01:04:11.098 | 1.00th=[18744], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 01:04:11.098 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 01:04:11.098 | 70.00th=[24511], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 01:04:11.098 | 99.00th=[25297], 99.50th=[25560], 99.90th=[25822], 99.95th=[25822], 01:04:11.098 | 99.99th=[25822] 01:04:11.098 write: IOPS=2840, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1003msec); 0 zone resets 01:04:11.098 slat (usec): 
min=23, max=6297, avg=178.19, stdev=808.60 01:04:11.098 clat (usec): min=630, max=26102, avg=22698.34, stdev=2547.88 01:04:11.098 lat (usec): min=5184, max=26134, avg=22876.54, stdev=2419.23 01:04:11.098 clat percentiles (usec): 01:04:11.098 | 1.00th=[ 6259], 5.00th=[18744], 10.00th=[21627], 20.00th=[22152], 01:04:11.098 | 30.00th=[22414], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 01:04:11.098 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 01:04:11.098 | 99.00th=[26084], 99.50th=[26084], 99.90th=[26084], 99.95th=[26084], 01:04:11.098 | 99.99th=[26084] 01:04:11.098 bw ( KiB/s): min= 9480, max=12312, per=29.14%, avg=10896.00, stdev=2002.53, samples=2 01:04:11.098 iops : min= 2370, max= 3078, avg=2724.00, stdev=500.63, samples=2 01:04:11.098 lat (usec) : 750=0.02% 01:04:11.098 lat (msec) : 10=0.59%, 20=4.09%, 50=95.30% 01:04:11.098 cpu : usr=3.49%, sys=10.88%, ctx=207, majf=0, minf=11 01:04:11.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 01:04:11.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:11.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:04:11.098 issued rwts: total=2560,2849,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:11.098 latency : target=0, window=0, percentile=100.00%, depth=128 01:04:11.098 job2: (groupid=0, jobs=1): err= 0: pid=66378: Mon Dec 9 06:03:05 2024 01:04:11.098 read: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec) 01:04:11.098 slat (usec): min=16, max=12636, avg=248.13, stdev=1230.28 01:04:11.098 clat (usec): min=19218, max=56104, avg=31838.31, stdev=5966.69 01:04:11.098 lat (usec): min=19250, max=56149, avg=32086.43, stdev=6024.04 01:04:11.098 clat percentiles (usec): 01:04:11.098 | 1.00th=[19530], 5.00th=[23725], 10.00th=[24249], 20.00th=[26608], 01:04:11.098 | 30.00th=[27395], 40.00th=[28443], 50.00th=[30802], 60.00th=[33817], 01:04:11.098 | 70.00th=[35914], 80.00th=[36963], 90.00th=[39060], 95.00th=[40109], 01:04:11.098 | 99.00th=[48497], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 01:04:11.098 | 99.99th=[56361] 01:04:11.098 write: IOPS=2507, BW=9.80MiB/s (10.3MB/s)(9.86MiB/1006msec); 0 zone resets 01:04:11.098 slat (usec): min=24, max=17486, avg=185.79, stdev=1207.50 01:04:11.098 clat (usec): min=450, max=53735, avg=24193.81, stdev=6087.48 01:04:11.098 lat (usec): min=5625, max=53774, avg=24379.59, stdev=6206.23 01:04:11.098 clat percentiles (usec): 01:04:11.098 | 1.00th=[ 6587], 5.00th=[18482], 10.00th=[19530], 20.00th=[20579], 01:04:11.098 | 30.00th=[20841], 40.00th=[22152], 50.00th=[22938], 60.00th=[23725], 01:04:11.098 | 70.00th=[24249], 80.00th=[30016], 90.00th=[32113], 95.00th=[34866], 01:04:11.098 | 99.00th=[43254], 99.50th=[43254], 99.90th=[46400], 99.95th=[49021], 01:04:11.098 | 99.99th=[53740] 01:04:11.098 bw ( KiB/s): min= 8208, max=10968, per=25.64%, avg=9588.00, stdev=1951.61, samples=2 01:04:11.098 iops : min= 2052, max= 2742, avg=2397.00, stdev=487.90, samples=2 01:04:11.098 lat (usec) : 500=0.02% 01:04:11.098 lat (msec) : 10=1.38%, 20=5.86%, 50=92.69%, 100=0.04% 01:04:11.098 cpu : usr=3.38%, sys=8.96%, ctx=128, majf=0, minf=7 01:04:11.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 01:04:11.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:11.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:04:11.098 issued rwts: total=2048,2523,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:11.098 latency : target=0, window=0, 
percentile=100.00%, depth=128 01:04:11.098 job3: (groupid=0, jobs=1): err= 0: pid=66379: Mon Dec 9 06:03:05 2024 01:04:11.098 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 01:04:11.098 slat (usec): min=9, max=7715, avg=183.36, stdev=905.92 01:04:11.098 clat (usec): min=16597, max=27169, avg=23754.08, stdev=1353.51 01:04:11.098 lat (usec): min=21180, max=27190, avg=23937.44, stdev=1045.09 01:04:11.098 clat percentiles (usec): 01:04:11.098 | 1.00th=[18744], 5.00th=[21365], 10.00th=[21890], 20.00th=[23200], 01:04:11.098 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[24249], 01:04:11.098 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25297], 01:04:11.098 | 99.00th=[27132], 99.50th=[27132], 99.90th=[27132], 99.95th=[27132], 01:04:11.098 | 99.99th=[27132] 01:04:11.098 write: IOPS=2872, BW=11.2MiB/s (11.8MB/s)(11.3MiB/1003msec); 0 zone resets 01:04:11.098 slat (usec): min=20, max=5472, avg=173.14, stdev=776.88 01:04:11.098 clat (usec): min=554, max=26759, avg=22680.46, stdev=2731.60 01:04:11.098 lat (usec): min=5165, max=26794, avg=22853.60, stdev=2611.53 01:04:11.098 clat percentiles (usec): 01:04:11.098 | 1.00th=[ 6259], 5.00th=[18744], 10.00th=[20841], 20.00th=[21890], 01:04:11.098 | 30.00th=[22414], 40.00th=[22676], 50.00th=[23200], 60.00th=[23462], 01:04:11.098 | 70.00th=[23725], 80.00th=[24249], 90.00th=[24773], 95.00th=[25560], 01:04:11.098 | 99.00th=[26346], 99.50th=[26608], 99.90th=[26608], 99.95th=[26870], 01:04:11.098 | 99.99th=[26870] 01:04:11.098 bw ( KiB/s): min= 9736, max=12312, per=29.48%, avg=11024.00, stdev=1821.51, samples=2 01:04:11.098 iops : min= 2434, max= 3078, avg=2756.00, stdev=455.38, samples=2 01:04:11.098 lat (usec) : 750=0.02% 01:04:11.098 lat (msec) : 10=0.59%, 20=5.15%, 50=94.25% 01:04:11.098 cpu : usr=2.89%, sys=12.18%, ctx=171, majf=0, minf=11 01:04:11.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 01:04:11.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:11.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:04:11.098 issued rwts: total=2560,2881,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:11.098 latency : target=0, window=0, percentile=100.00%, depth=128 01:04:11.098 01:04:11.098 Run status group 0 (all jobs): 01:04:11.098 READ: bw=31.7MiB/s (33.3MB/s), 4063KiB/s-9.97MiB/s (4161kB/s-10.5MB/s), io=32.0MiB (33.6MB), run=1003-1008msec 01:04:11.098 WRITE: bw=36.5MiB/s (38.3MB/s), 4643KiB/s-11.2MiB/s (4754kB/s-11.8MB/s), io=36.8MiB (38.6MB), run=1003-1008msec 01:04:11.098 01:04:11.098 Disk stats (read/write): 01:04:11.098 nvme0n1: ios=939/1024, merge=0/0, ticks=14073/21203, in_queue=35276, util=89.07% 01:04:11.098 nvme0n2: ios=2193/2560, merge=0/0, ticks=11775/13048, in_queue=24823, util=90.10% 01:04:11.098 nvme0n3: ios=1824/2048, merge=0/0, ticks=28135/22157, in_queue=50292, util=90.25% 01:04:11.098 nvme0n4: ios=2161/2560, merge=0/0, ticks=11931/12780, in_queue=24711, util=90.09% 01:04:11.098 06:03:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 01:04:11.098 06:03:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66393 01:04:11.098 06:03:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 01:04:11.098 06:03:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 01:04:11.099 [global] 01:04:11.099 thread=1 01:04:11.099 invalidate=1 01:04:11.099 rw=read 
01:04:11.099 time_based=1 01:04:11.099 runtime=10 01:04:11.099 ioengine=libaio 01:04:11.099 direct=1 01:04:11.099 bs=4096 01:04:11.099 iodepth=1 01:04:11.099 norandommap=1 01:04:11.099 numjobs=1 01:04:11.099 01:04:11.099 [job0] 01:04:11.099 filename=/dev/nvme0n1 01:04:11.099 [job1] 01:04:11.099 filename=/dev/nvme0n2 01:04:11.099 [job2] 01:04:11.099 filename=/dev/nvme0n3 01:04:11.099 [job3] 01:04:11.099 filename=/dev/nvme0n4 01:04:11.099 Could not set queue depth (nvme0n1) 01:04:11.099 Could not set queue depth (nvme0n2) 01:04:11.099 Could not set queue depth (nvme0n3) 01:04:11.099 Could not set queue depth (nvme0n4) 01:04:11.099 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:04:11.099 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:04:11.099 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:04:11.099 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:04:11.099 fio-3.35 01:04:11.099 Starting 4 threads 01:04:14.385 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 01:04:14.385 fio: pid=66436, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 01:04:14.385 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=35401728, buflen=4096 01:04:14.385 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 01:04:14.385 fio: pid=66435, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 01:04:14.385 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=41902080, buflen=4096 01:04:14.385 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:04:14.385 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 01:04:14.385 fio: pid=66433, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 01:04:14.385 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=41287680, buflen=4096 01:04:14.385 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:04:14.385 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 01:04:14.643 fio: pid=66434, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 01:04:14.643 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=50720768, buflen=4096 01:04:14.643 01:04:14.643 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66433: Mon Dec 9 06:03:09 2024 01:04:14.643 read: IOPS=3128, BW=12.2MiB/s (12.8MB/s)(39.4MiB/3222msec) 01:04:14.643 slat (usec): min=5, max=9692, avg=15.70, stdev=178.25 01:04:14.643 clat (usec): min=84, max=4249, avg=302.75, stdev=80.04 01:04:14.643 lat (usec): min=93, max=9974, avg=318.45, stdev=195.69 01:04:14.643 clat percentiles (usec): 01:04:14.643 | 1.00th=[ 124], 5.00th=[ 227], 10.00th=[ 255], 20.00th=[ 273], 01:04:14.643 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 
297], 60.00th=[ 306], 01:04:14.643 | 70.00th=[ 318], 80.00th=[ 330], 90.00th=[ 367], 95.00th=[ 400], 01:04:14.643 | 99.00th=[ 449], 99.50th=[ 474], 99.90th=[ 635], 99.95th=[ 979], 01:04:14.643 | 99.99th=[ 4178] 01:04:14.643 bw ( KiB/s): min=11968, max=13037, per=25.97%, avg=12451.50, stdev=414.77, samples=6 01:04:14.643 iops : min= 2992, max= 3259, avg=3112.83, stdev=103.62, samples=6 01:04:14.643 lat (usec) : 100=0.20%, 250=8.10%, 500=91.45%, 750=0.15%, 1000=0.04% 01:04:14.643 lat (msec) : 2=0.03%, 10=0.02% 01:04:14.643 cpu : usr=1.02%, sys=3.79%, ctx=10087, majf=0, minf=1 01:04:14.643 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:04:14.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:14.643 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:14.643 issued rwts: total=10081,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:14.643 latency : target=0, window=0, percentile=100.00%, depth=1 01:04:14.643 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66434: Mon Dec 9 06:03:09 2024 01:04:14.643 read: IOPS=3590, BW=14.0MiB/s (14.7MB/s)(48.4MiB/3449msec) 01:04:14.643 slat (usec): min=7, max=10987, avg=14.97, stdev=188.74 01:04:14.643 clat (usec): min=101, max=3874, avg=262.50, stdev=90.97 01:04:14.643 lat (usec): min=108, max=11250, avg=277.48, stdev=208.41 01:04:14.643 clat percentiles (usec): 01:04:14.643 | 1.00th=[ 116], 5.00th=[ 135], 10.00th=[ 157], 20.00th=[ 215], 01:04:14.643 | 30.00th=[ 253], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 285], 01:04:14.643 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 318], 95.00th=[ 330], 01:04:14.643 | 99.00th=[ 355], 99.50th=[ 359], 99.90th=[ 832], 99.95th=[ 1663], 01:04:14.643 | 99.99th=[ 3851] 01:04:14.643 bw ( KiB/s): min=13336, max=14022, per=28.37%, avg=13602.33, stdev=245.25, samples=6 01:04:14.643 iops : min= 3334, max= 3505, avg=3400.50, stdev=61.14, samples=6 01:04:14.643 lat (usec) : 250=28.79%, 500=71.05%, 750=0.03%, 1000=0.04% 01:04:14.643 lat (msec) : 2=0.04%, 4=0.04% 01:04:14.643 cpu : usr=0.90%, sys=3.54%, ctx=12394, majf=0, minf=1 01:04:14.643 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:04:14.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:14.643 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:14.643 issued rwts: total=12384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:14.643 latency : target=0, window=0, percentile=100.00%, depth=1 01:04:14.643 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66435: Mon Dec 9 06:03:09 2024 01:04:14.643 read: IOPS=3368, BW=13.2MiB/s (13.8MB/s)(40.0MiB/3037msec) 01:04:14.643 slat (usec): min=7, max=9833, avg=10.57, stdev=119.70 01:04:14.643 clat (usec): min=117, max=1671, avg=285.25, stdev=46.51 01:04:14.643 lat (usec): min=125, max=10095, avg=295.81, stdev=127.68 01:04:14.643 clat percentiles (usec): 01:04:14.643 | 1.00th=[ 174], 5.00th=[ 212], 10.00th=[ 245], 20.00th=[ 265], 01:04:14.643 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 293], 01:04:14.643 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 326], 95.00th=[ 334], 01:04:14.643 | 99.00th=[ 359], 99.50th=[ 363], 99.90th=[ 717], 99.95th=[ 1037], 01:04:14.643 | 99.99th=[ 1582] 01:04:14.643 bw ( KiB/s): min=13296, max=13640, per=28.20%, avg=13520.00, stdev=139.60, samples=5 01:04:14.643 iops : min= 3324, max= 3410, avg=3380.00, stdev=34.90, samples=5 
01:04:14.643 lat (usec) : 250=11.77%, 500=88.06%, 750=0.08%, 1000=0.03% 01:04:14.644 lat (msec) : 2=0.06% 01:04:14.644 cpu : usr=0.49%, sys=2.87%, ctx=10233, majf=0, minf=2 01:04:14.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:04:14.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:14.644 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:14.644 issued rwts: total=10231,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:14.644 latency : target=0, window=0, percentile=100.00%, depth=1 01:04:14.644 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66436: Mon Dec 9 06:03:09 2024 01:04:14.644 read: IOPS=3057, BW=11.9MiB/s (12.5MB/s)(33.8MiB/2827msec) 01:04:14.644 slat (usec): min=5, max=1538, avg=13.22, stdev=16.87 01:04:14.644 clat (usec): min=169, max=3825, avg=312.30, stdev=73.13 01:04:14.644 lat (usec): min=181, max=3849, avg=325.53, stdev=75.83 01:04:14.644 clat percentiles (usec): 01:04:14.644 | 1.00th=[ 247], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 277], 01:04:14.644 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 310], 01:04:14.644 | 70.00th=[ 322], 80.00th=[ 338], 90.00th=[ 375], 95.00th=[ 404], 01:04:14.644 | 99.00th=[ 449], 99.50th=[ 469], 99.90th=[ 586], 99.95th=[ 1020], 01:04:14.644 | 99.99th=[ 3818] 01:04:14.644 bw ( KiB/s): min=11968, max=12704, per=25.62%, avg=12280.00, stdev=297.56, samples=5 01:04:14.644 iops : min= 2992, max= 3176, avg=3070.00, stdev=74.39, samples=5 01:04:14.644 lat (usec) : 250=1.45%, 500=98.36%, 750=0.10%, 1000=0.01% 01:04:14.644 lat (msec) : 2=0.03%, 4=0.03% 01:04:14.644 cpu : usr=1.27%, sys=3.89%, ctx=8645, majf=0, minf=2 01:04:14.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:04:14.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:14.644 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:14.644 issued rwts: total=8644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:14.644 latency : target=0, window=0, percentile=100.00%, depth=1 01:04:14.644 01:04:14.644 Run status group 0 (all jobs): 01:04:14.644 READ: bw=46.8MiB/s (49.1MB/s), 11.9MiB/s-14.0MiB/s (12.5MB/s-14.7MB/s), io=161MiB (169MB), run=2827-3449msec 01:04:14.644 01:04:14.644 Disk stats (read/write): 01:04:14.644 nvme0n1: ios=9732/0, merge=0/0, ticks=2858/0, in_queue=2858, util=95.20% 01:04:14.644 nvme0n2: ios=11870/0, merge=0/0, ticks=3179/0, in_queue=3179, util=95.19% 01:04:14.644 nvme0n3: ios=9758/0, merge=0/0, ticks=2783/0, in_queue=2783, util=96.34% 01:04:14.644 nvme0n4: ios=8035/0, merge=0/0, ticks=2398/0, in_queue=2398, util=96.33% 01:04:14.644 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:04:14.644 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 01:04:14.902 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:04:14.902 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 01:04:15.160 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:04:15.160 06:03:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 01:04:15.419 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:04:15.419 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 01:04:15.677 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:04:15.677 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 01:04:15.935 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 01:04:15.935 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66393 01:04:15.935 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 01:04:15.935 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:04:15.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:04:15.935 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:04:15.935 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 01:04:15.935 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:04:15.935 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:04:15.935 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:04:15.935 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:04:15.935 nvmf hotplug test: fio failed as expected 01:04:15.935 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 01:04:15.935 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 01:04:15.935 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 01:04:15.935 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:04:16.194 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 01:04:16.194 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 01:04:16.194 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 01:04:16.194 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 01:04:16.194 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 01:04:16.194 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 01:04:16.194 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 01:04:16.194 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
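The hotplug check that just completed boils down to: launch a long-running read job against the exported namespaces, pull the backing bdevs out from underneath it, and require that fio exits non-zero. A minimal sketch of that flow in shell, using the same fio-wrapper flags and bdev names seen in the trace (the rpc.py path is the one from this run; the simplified error handling is illustrative, not the verbatim fio.sh logic):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # start the 10-second read workload in the background, as fio.sh@58 does
    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3
    # delete the RAID bdevs and then their malloc members while fio is still reading
    $rpc bdev_raid_delete concat0
    $rpc bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        $rpc bdev_malloc_delete "$m"
    done
    # fio is expected to fail once its namespaces disappear
    if wait "$fio_pid"; then
        echo "unexpected: fio survived bdev removal"
    else
        echo "nvmf hotplug test: fio failed as expected"
    fi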
01:04:16.194 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 01:04:16.194 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 01:04:16.194 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:04:16.194 rmmod nvme_tcp 01:04:16.194 rmmod nvme_fabrics 01:04:16.194 rmmod nvme_keyring 01:04:16.194 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:04:16.194 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 01:04:16.194 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 01:04:16.194 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66016 ']' 01:04:16.194 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66016 01:04:16.194 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 66016 ']' 01:04:16.194 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 66016 01:04:16.194 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 01:04:16.194 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:04:16.194 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66016 01:04:16.194 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:04:16.194 killing process with pid 66016 01:04:16.194 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:04:16.194 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66016' 01:04:16.194 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 66016 01:04:16.194 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 66016 01:04:16.481 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:04:16.481 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:04:16.481 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:04:16.481 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 01:04:16.482 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 01:04:16.482 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:04:16.482 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 01:04:16.482 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:04:16.482 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:04:16.482 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:04:16.482 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:04:16.482 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip 
link set nvmf_tgt_br nomaster 01:04:16.482 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:04:16.482 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:04:16.482 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:04:16.482 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:04:16.482 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:04:16.482 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:04:16.482 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:04:16.482 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:04:16.482 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:04:16.742 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:04:16.742 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 01:04:16.742 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:04:16.742 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:04:16.742 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:04:16.742 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 01:04:16.742 01:04:16.742 real 0m18.967s 01:04:16.742 user 1m10.564s 01:04:16.742 sys 0m8.445s 01:04:16.742 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 01:04:16.742 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:04:16.742 ************************************ 01:04:16.742 END TEST nvmf_fio_target 01:04:16.742 ************************************ 01:04:16.742 06:03:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 01:04:16.742 06:03:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:04:16.742 06:03:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:04:16.742 06:03:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:04:16.742 ************************************ 01:04:16.742 START TEST nvmf_bdevio 01:04:16.742 ************************************ 01:04:16.742 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 01:04:17.002 * Looking for test storage... 
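Editor's note: the fio-target teardown that just finished also shows the killprocess pattern used throughout these scripts: confirm the pid is still alive, check what it is (and that it is not a sudo wrapper), then kill it and wait for it to exit. A rough sketch, assuming the target was launched from the same shell so wait can reap it; the pid is the one from the log above:

  pid=66016                                    # nvmf target pid from the log above
  if kill -0 "$pid" 2>/dev/null; then          # is the process still alive?
      name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
      echo "killing process with pid $pid ($name)"
      kill "$pid"
      wait "$pid" 2>/dev/null || true          # reap it if it is our child
  fi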
01:04:17.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:04:17.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:17.002 --rc genhtml_branch_coverage=1 01:04:17.002 --rc genhtml_function_coverage=1 01:04:17.002 --rc genhtml_legend=1 01:04:17.002 --rc geninfo_all_blocks=1 01:04:17.002 --rc geninfo_unexecuted_blocks=1 01:04:17.002 01:04:17.002 ' 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:04:17.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:17.002 --rc genhtml_branch_coverage=1 01:04:17.002 --rc genhtml_function_coverage=1 01:04:17.002 --rc genhtml_legend=1 01:04:17.002 --rc geninfo_all_blocks=1 01:04:17.002 --rc geninfo_unexecuted_blocks=1 01:04:17.002 01:04:17.002 ' 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:04:17.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:17.002 --rc genhtml_branch_coverage=1 01:04:17.002 --rc genhtml_function_coverage=1 01:04:17.002 --rc genhtml_legend=1 01:04:17.002 --rc geninfo_all_blocks=1 01:04:17.002 --rc geninfo_unexecuted_blocks=1 01:04:17.002 01:04:17.002 ' 01:04:17.002 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:04:17.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:17.002 --rc genhtml_branch_coverage=1 01:04:17.003 --rc genhtml_function_coverage=1 01:04:17.003 --rc genhtml_legend=1 01:04:17.003 --rc geninfo_all_blocks=1 01:04:17.003 --rc geninfo_unexecuted_blocks=1 01:04:17.003 01:04:17.003 ' 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:04:17.003 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
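Editor's note: the "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" message above is the usual symptom of a numeric -eq test against an empty or unset variable. Purely as an illustration (the variable name below is invented, not the one in common.sh), a defensive form defaults the value before comparing:

  # Hypothetical example: default an unset flag to 0 before a numeric test,
  # which avoids "[: : integer expression expected".
  some_flag="${SOME_FLAG:-0}"        # SOME_FLAG is an invented name for illustration
  if [ "$some_flag" -eq 1 ]; then
      echo "flag enabled"
  fi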
01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:04:17.003 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:04:17.004 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:04:17.004 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:04:17.004 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:04:17.004 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:04:17.004 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:04:17.004 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:04:17.004 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:04:17.004 Cannot find device "nvmf_init_br" 01:04:17.004 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 01:04:17.004 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:04:17.004 Cannot find device "nvmf_init_br2" 01:04:17.004 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 01:04:17.004 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:04:17.004 Cannot find device "nvmf_tgt_br" 01:04:17.004 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 01:04:17.004 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:04:17.004 Cannot find device "nvmf_tgt_br2" 01:04:17.263 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 01:04:17.263 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:04:17.263 Cannot find device "nvmf_init_br" 01:04:17.263 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 01:04:17.263 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:04:17.263 Cannot find device "nvmf_init_br2" 01:04:17.263 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 01:04:17.263 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:04:17.263 Cannot find device "nvmf_tgt_br" 01:04:17.263 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 01:04:17.263 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:04:17.263 Cannot find device "nvmf_tgt_br2" 01:04:17.263 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 01:04:17.263 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:04:17.263 Cannot find device "nvmf_br" 01:04:17.263 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 01:04:17.263 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:04:17.263 Cannot find device "nvmf_init_if" 01:04:17.263 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 01:04:17.263 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:04:17.263 Cannot find device "nvmf_init_if2" 01:04:17.263 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 01:04:17.263 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:04:17.263 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:04:17.263 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 01:04:17.263 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:04:17.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:04:17.264 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 01:04:17.264 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:04:17.264 
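Editor's note: the "Cannot find device" and "Cannot open network namespace" lines above are expected on a fresh run: nvmf_veth_init first tears down any leftovers from a previous test, and each delete tolerates a missing object (the script pairs every failing command with true). A condensed sketch of that idempotent-cleanup idea, with the same device names:

  # Remove possible leftovers without aborting on a clean machine.
  ip link delete nvmf_init_if 2>/dev/null || true
  ip link delete nvmf_br type bridge 2>/dev/null || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true
  ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true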
06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:04:17.264 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:04:17.264 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:04:17.264 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:04:17.264 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:04:17.264 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:04:17.264 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:04:17.264 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:04:17.264 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:04:17.264 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:04:17.264 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:04:17.264 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:04:17.264 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:04:17.264 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:04:17.264 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:04:17.264 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:04:17.264 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:04:17.522 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:04:17.522 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:04:17.522 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:04:17.522 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:04:17.522 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:04:17.522 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:04:17.522 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:04:17.522 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:04:17.522 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:04:17.522 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:04:17.522 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:04:17.522 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:04:17.522 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:04:17.522 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:04:17.522 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:04:17.522 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:04:17.522 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.114 ms 01:04:17.522 01:04:17.522 --- 10.0.0.3 ping statistics --- 01:04:17.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:17.522 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 01:04:17.522 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:04:17.522 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:04:17.522 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.089 ms 01:04:17.522 01:04:17.522 --- 10.0.0.4 ping statistics --- 01:04:17.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:17.522 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 01:04:17.522 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:04:17.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:04:17.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 01:04:17.522 01:04:17.522 --- 10.0.0.1 ping statistics --- 01:04:17.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:17.523 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 01:04:17.523 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:04:17.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
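Editor's note: each iptables rule above is installed through a small wrapper that appends an SPDK_NVMF comment encoding the original rule; that tag is what lets the teardown (the iptr step seen earlier and again later in this log) strip every test rule in one pass. Sketch, mirroring the rule and the save/filter/restore sequence from this log:

  # Allow NVMe/TCP traffic in, tagged so it can be removed wholesale later.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  # Teardown: drop every rule carrying the SPDK_NVMF tag in one shot.
  iptables-save | grep -v SPDK_NVMF | iptables-restore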
01:04:17.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 01:04:17.523 01:04:17.523 --- 10.0.0.2 ping statistics --- 01:04:17.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:17.523 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 01:04:17.523 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:04:17.523 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 01:04:17.523 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:04:17.523 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:04:17.523 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:04:17.523 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:04:17.523 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:04:17.523 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:04:17.523 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:04:17.523 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 01:04:17.523 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:04:17.523 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 01:04:17.523 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:04:17.523 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=66755 01:04:17.523 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 66755 01:04:17.523 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 01:04:17.523 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 66755 ']' 01:04:17.523 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:04:17.523 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 01:04:17.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:04:17.523 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:04:17.523 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 01:04:17.523 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:04:17.523 [2024-12-09 06:03:12.083020] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
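Editor's note: at this point nvmf_veth_init has finished: the initiator-side veth ends stay in the root namespace, the target-side ends live inside nvmf_tgt_ns_spdk, all bridge ends are enslaved to nvmf_br, and the ping checks above confirm 10.0.0.1-10.0.0.4 reach each other. A condensed sketch of that topology for one initiator/target pair, using the names and addresses from the log:

  # Build one initiator-side and one target-side veth pair, bridged together.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                     # bridge the two sides
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.3                                          # root ns -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # target ns -> initiator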
01:04:17.523 [2024-12-09 06:03:12.083078] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:04:17.781 [2024-12-09 06:03:12.237660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:04:17.781 [2024-12-09 06:03:12.278460] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:04:17.781 [2024-12-09 06:03:12.278499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:04:17.781 [2024-12-09 06:03:12.278508] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:04:17.781 [2024-12-09 06:03:12.278516] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:04:17.781 [2024-12-09 06:03:12.278522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:04:17.781 [2024-12-09 06:03:12.279458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:04:17.781 [2024-12-09 06:03:12.279775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 01:04:17.781 [2024-12-09 06:03:12.279908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 01:04:17.781 [2024-12-09 06:03:12.279963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:04:17.781 [2024-12-09 06:03:12.322531] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:04:18.715 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:04:18.715 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 01:04:18.715 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:04:18.715 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 01:04:18.715 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:04:18.715 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:04:18.715 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:04:18.715 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:18.715 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:04:18.715 [2024-12-09 06:03:12.997215] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:04:18.715 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:18.715 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:04:18.715 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:18.715 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:04:18.715 Malloc0 01:04:18.715 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:18.715 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 01:04:18.715 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:18.715 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:04:18.715 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:18.715 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:04:18.715 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:18.715 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:04:18.715 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:18.715 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:04:18.715 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:18.715 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:04:18.715 [2024-12-09 06:03:13.067073] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:04:18.715 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:18.715 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 01:04:18.715 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 01:04:18.715 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 01:04:18.715 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 01:04:18.715 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:04:18.715 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:04:18.715 { 01:04:18.715 "params": { 01:04:18.715 "name": "Nvme$subsystem", 01:04:18.715 "trtype": "$TEST_TRANSPORT", 01:04:18.715 "traddr": "$NVMF_FIRST_TARGET_IP", 01:04:18.715 "adrfam": "ipv4", 01:04:18.715 "trsvcid": "$NVMF_PORT", 01:04:18.715 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:04:18.715 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:04:18.715 "hdgst": ${hdgst:-false}, 01:04:18.715 "ddgst": ${ddgst:-false} 01:04:18.715 }, 01:04:18.715 "method": "bdev_nvme_attach_controller" 01:04:18.715 } 01:04:18.715 EOF 01:04:18.715 )") 01:04:18.715 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 01:04:18.715 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
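Editor's note: the bdevio target above is assembled with five RPCs against the nvmf_tgt running inside the namespace: create the TCP transport, back it with a 64 MiB / 512 B malloc bdev, create the subsystem, attach the bdev as a namespace, and listen on 10.0.0.3:4420. The test issues them through the rpc_cmd helper; collected into one sketch using the equivalent rpc.py command line seen elsewhere in this log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420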
01:04:18.715 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 01:04:18.715 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:04:18.715 "params": { 01:04:18.715 "name": "Nvme1", 01:04:18.715 "trtype": "tcp", 01:04:18.715 "traddr": "10.0.0.3", 01:04:18.715 "adrfam": "ipv4", 01:04:18.715 "trsvcid": "4420", 01:04:18.715 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:04:18.715 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:04:18.715 "hdgst": false, 01:04:18.715 "ddgst": false 01:04:18.715 }, 01:04:18.715 "method": "bdev_nvme_attach_controller" 01:04:18.715 }' 01:04:18.715 [2024-12-09 06:03:13.120821] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:04:18.716 [2024-12-09 06:03:13.120879] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66791 ] 01:04:18.716 [2024-12-09 06:03:13.261572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:04:18.976 [2024-12-09 06:03:13.305400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:04:18.976 [2024-12-09 06:03:13.305584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:04:18.976 [2024-12-09 06:03:13.305584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:04:18.976 [2024-12-09 06:03:13.355236] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:04:18.976 I/O targets: 01:04:18.976 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 01:04:18.976 01:04:18.976 01:04:18.976 CUnit - A unit testing framework for C - Version 2.1-3 01:04:18.976 http://cunit.sourceforge.net/ 01:04:18.976 01:04:18.976 01:04:18.976 Suite: bdevio tests on: Nvme1n1 01:04:18.976 Test: blockdev write read block ...passed 01:04:18.976 Test: blockdev write zeroes read block ...passed 01:04:18.976 Test: blockdev write zeroes read no split ...passed 01:04:18.976 Test: blockdev write zeroes read split ...passed 01:04:18.976 Test: blockdev write zeroes read split partial ...passed 01:04:18.976 Test: blockdev reset ...[2024-12-09 06:03:13.493758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 01:04:18.976 [2024-12-09 06:03:13.493839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa3b80 (9): Bad file descriptor 01:04:18.976 passed 01:04:18.976 Test: blockdev write read 8 blocks ...[2024-12-09 06:03:13.512788] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
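Editor's note: bdevio is not pointed at the kernel initiator; it receives an SPDK JSON config on /dev/fd/62 that attaches the NVMe-oF controller directly (the params printed above: Nvme1, tcp, 10.0.0.3:4420, cnode1). A hypothetical standalone equivalent of that invocation is sketched below; the surrounding subsystems/bdev wrapper is assumed to follow the standard SPDK JSON config layout, while the params block is taken verbatim from the log:

  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json <(cat <<'EOF'
  {
    "subsystems": [{
      "subsystem": "bdev",
      "config": [{
        "method": "bdev_nvme_attach_controller",
        "params": {
          "name": "Nvme1",
          "trtype": "tcp",
          "traddr": "10.0.0.3",
          "adrfam": "ipv4",
          "trsvcid": "4420",
          "subnqn": "nqn.2016-06.io.spdk:cnode1",
          "hostnqn": "nqn.2016-06.io.spdk:host1",
          "hdgst": false,
          "ddgst": false
        }
      }]
    }]
  }
  EOF
  )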
01:04:18.976 passed 01:04:18.976 Test: blockdev write read size > 128k ...passed 01:04:18.976 Test: blockdev write read invalid size ...passed 01:04:18.976 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:04:18.976 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:04:18.976 Test: blockdev write read max offset ...passed 01:04:18.976 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:04:18.976 Test: blockdev writev readv 8 blocks ...passed 01:04:18.976 Test: blockdev writev readv 30 x 1block ...passed 01:04:18.976 Test: blockdev writev readv block ...passed 01:04:18.976 Test: blockdev writev readv size > 128k ...passed 01:04:18.976 Test: blockdev writev readv size > 128k in two iovs ...passed 01:04:18.976 Test: blockdev comparev and writev ...[2024-12-09 06:03:13.520685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:04:18.976 [2024-12-09 06:03:13.520880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:04:18.976 [2024-12-09 06:03:13.520904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:04:18.976 [2024-12-09 06:03:13.520915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:04:18.976 [2024-12-09 06:03:13.521245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:04:18.976 [2024-12-09 06:03:13.521260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:04:18.976 [2024-12-09 06:03:13.521274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:04:18.976 [2024-12-09 06:03:13.521283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:04:18.976 [2024-12-09 06:03:13.521656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:04:18.976 [2024-12-09 06:03:13.521668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:04:18.976 [2024-12-09 06:03:13.521683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:04:18.976 [2024-12-09 06:03:13.521692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:04:18.976 [2024-12-09 06:03:13.522016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:04:18.976 [2024-12-09 06:03:13.522028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:04:18.976 [2024-12-09 06:03:13.522042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:04:18.976 [2024-12-09 06:03:13.522051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
01:04:18.976 passed 01:04:18.976 Test: blockdev nvme passthru rw ...passed 01:04:18.976 Test: blockdev nvme passthru vendor specific ...[2024-12-09 06:03:13.523062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:04:18.976 [2024-12-09 06:03:13.523100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:04:18.976 [2024-12-09 06:03:13.523191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:04:18.976 [2024-12-09 06:03:13.523204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:04:18.976 [2024-12-09 06:03:13.523302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:04:18.976 [2024-12-09 06:03:13.523319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:04:18.976 passed 01:04:18.976 Test: blockdev nvme admin passthru ...[2024-12-09 06:03:13.523401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:04:18.976 [2024-12-09 06:03:13.523420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:04:18.976 passed 01:04:18.976 Test: blockdev copy ...passed 01:04:18.976 01:04:18.976 Run Summary: Type Total Ran Passed Failed Inactive 01:04:18.976 suites 1 1 n/a 0 0 01:04:18.976 tests 23 23 23 0 0 01:04:18.976 asserts 152 152 152 0 n/a 01:04:18.976 01:04:18.976 Elapsed time = 0.144 seconds 01:04:19.236 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:04:19.236 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:19.236 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:04:19.236 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:19.236 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 01:04:19.236 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 01:04:19.236 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 01:04:19.236 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 01:04:19.236 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:04:19.236 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 01:04:19.236 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 01:04:19.236 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:04:19.236 rmmod nvme_tcp 01:04:19.236 rmmod nvme_fabrics 01:04:19.236 rmmod nvme_keyring 01:04:19.236 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:04:19.236 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 01:04:19.236 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 01:04:19.236 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@517 -- # '[' -n 66755 ']' 01:04:19.236 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 66755 01:04:19.236 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 66755 ']' 01:04:19.236 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 66755 01:04:19.236 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 01:04:19.496 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:04:19.496 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66755 01:04:19.496 killing process with pid 66755 01:04:19.496 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 01:04:19.496 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 01:04:19.496 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66755' 01:04:19.496 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 66755 01:04:19.496 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 66755 01:04:19.496 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:04:19.496 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:04:19.496 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:04:19.496 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 01:04:19.496 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 01:04:19.496 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 01:04:19.496 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:04:19.496 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:04:19.496 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:04:19.496 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:04:19.756 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:04:19.756 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:04:19.756 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:04:19.756 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:04:19.756 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:04:19.756 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:04:19.756 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:04:19.756 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:04:19.756 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:04:19.756 06:03:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:04:19.756 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:04:19.756 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:04:19.756 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 01:04:19.756 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:04:19.756 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:04:19.756 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:04:20.016 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 01:04:20.016 01:04:20.016 real 0m3.141s 01:04:20.016 user 0m8.329s 01:04:20.016 sys 0m1.075s 01:04:20.016 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 01:04:20.016 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:04:20.016 ************************************ 01:04:20.016 END TEST nvmf_bdevio 01:04:20.016 ************************************ 01:04:20.016 06:03:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 01:04:20.016 01:04:20.016 ************************************ 01:04:20.016 END TEST nvmf_target_core 01:04:20.016 ************************************ 01:04:20.016 real 2m34.544s 01:04:20.016 user 6m32.083s 01:04:20.016 sys 0m56.900s 01:04:20.017 06:03:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 01:04:20.017 06:03:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:04:20.017 06:03:14 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 01:04:20.017 06:03:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:04:20.017 06:03:14 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 01:04:20.017 06:03:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:04:20.017 ************************************ 01:04:20.017 START TEST nvmf_target_extra 01:04:20.017 ************************************ 01:04:20.017 06:03:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 01:04:20.277 * Looking for test storage... 
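Editor's note: every suite in this log is driven through the same run_test wrapper, which prints a START banner, times the script (the real/user/sys lines above), and closes with an END banner. A rough, simplified sketch of that shape; the real helper in autotest_common.sh does more (argument checks, xtrace handling), so this is illustrative only:

  run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                               # produces the real/user/sys lines seen above
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }
  run_test_sketch nvmf_target_extra \
      /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp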
01:04:20.277 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:04:20.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:20.277 --rc genhtml_branch_coverage=1 01:04:20.277 --rc genhtml_function_coverage=1 01:04:20.277 --rc genhtml_legend=1 01:04:20.277 --rc geninfo_all_blocks=1 01:04:20.277 --rc geninfo_unexecuted_blocks=1 01:04:20.277 01:04:20.277 ' 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:04:20.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:20.277 --rc genhtml_branch_coverage=1 01:04:20.277 --rc genhtml_function_coverage=1 01:04:20.277 --rc genhtml_legend=1 01:04:20.277 --rc geninfo_all_blocks=1 01:04:20.277 --rc geninfo_unexecuted_blocks=1 01:04:20.277 01:04:20.277 ' 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:04:20.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:20.277 --rc genhtml_branch_coverage=1 01:04:20.277 --rc genhtml_function_coverage=1 01:04:20.277 --rc genhtml_legend=1 01:04:20.277 --rc geninfo_all_blocks=1 01:04:20.277 --rc geninfo_unexecuted_blocks=1 01:04:20.277 01:04:20.277 ' 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:04:20.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:20.277 --rc genhtml_branch_coverage=1 01:04:20.277 --rc genhtml_function_coverage=1 01:04:20.277 --rc genhtml_legend=1 01:04:20.277 --rc geninfo_all_blocks=1 01:04:20.277 --rc geninfo_unexecuted_blocks=1 01:04:20.277 01:04:20.277 ' 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:04:20.277 06:03:14 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:20.277 06:03:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 01:04:20.278 06:03:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:20.278 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 01:04:20.278 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:04:20.278 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:04:20.278 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:04:20.278 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:04:20.278 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:04:20.278 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:04:20.278 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:04:20.278 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:04:20.278 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:04:20.278 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 01:04:20.278 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 01:04:20.278 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 01:04:20.278 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 01:04:20.278 06:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 01:04:20.278 06:03:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:04:20.278 06:03:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:04:20.278 06:03:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:04:20.278 ************************************ 01:04:20.278 START TEST nvmf_auth_target 01:04:20.278 ************************************ 01:04:20.278 06:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 01:04:20.538 * Looking for test storage... 
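The "[: : integer expression expected" message logged above comes from nvmf/common.sh line 33, where an empty (unset) flag is handed to an integer test; POSIX '[' cannot compare an empty string with -eq, prints the diagnostic, and the condition simply evaluates as false while the run continues. A minimal sketch of that failure mode and a defensive default, using an illustrative variable name rather than the one the script actually tests:

    flag=""                                   # unset/empty feature flag
    [ "$flag" -eq 1 ] && echo enabled         # -> [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] && echo enabled    # default to 0: quietly false, no error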
01:04:20.538 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:04:20.538 06:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:04:20.538 06:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 01:04:20.538 06:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:04:20.538 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:04:20.538 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:04:20.538 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 01:04:20.538 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 01:04:20.538 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 01:04:20.538 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 01:04:20.538 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 01:04:20.538 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 01:04:20.538 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 01:04:20.538 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 01:04:20.538 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 01:04:20.538 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:04:20.538 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 01:04:20.538 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 01:04:20.538 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 01:04:20.538 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:04:20.538 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 01:04:20.538 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 01:04:20.538 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:04:20.538 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 01:04:20.538 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 01:04:20.538 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 01:04:20.538 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 01:04:20.538 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:04:20.538 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 01:04:20.538 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:04:20.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:20.539 --rc genhtml_branch_coverage=1 01:04:20.539 --rc genhtml_function_coverage=1 01:04:20.539 --rc genhtml_legend=1 01:04:20.539 --rc geninfo_all_blocks=1 01:04:20.539 --rc geninfo_unexecuted_blocks=1 01:04:20.539 01:04:20.539 ' 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:04:20.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:20.539 --rc genhtml_branch_coverage=1 01:04:20.539 --rc genhtml_function_coverage=1 01:04:20.539 --rc genhtml_legend=1 01:04:20.539 --rc geninfo_all_blocks=1 01:04:20.539 --rc geninfo_unexecuted_blocks=1 01:04:20.539 01:04:20.539 ' 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:04:20.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:20.539 --rc genhtml_branch_coverage=1 01:04:20.539 --rc genhtml_function_coverage=1 01:04:20.539 --rc genhtml_legend=1 01:04:20.539 --rc geninfo_all_blocks=1 01:04:20.539 --rc geninfo_unexecuted_blocks=1 01:04:20.539 01:04:20.539 ' 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:04:20.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:20.539 --rc genhtml_branch_coverage=1 01:04:20.539 --rc genhtml_function_coverage=1 01:04:20.539 --rc genhtml_legend=1 01:04:20.539 --rc geninfo_all_blocks=1 01:04:20.539 --rc geninfo_unexecuted_blocks=1 01:04:20.539 01:04:20.539 ' 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:04:20.539 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:04:20.539 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:04:20.799 
06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:04:20.799 Cannot find device "nvmf_init_br" 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:04:20.799 Cannot find device "nvmf_init_br2" 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:04:20.799 Cannot find device "nvmf_tgt_br" 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:04:20.799 Cannot find device "nvmf_tgt_br2" 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:04:20.799 Cannot find device "nvmf_init_br" 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:04:20.799 Cannot find device "nvmf_init_br2" 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:04:20.799 Cannot find device "nvmf_tgt_br" 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:04:20.799 Cannot find device "nvmf_tgt_br2" 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:04:20.799 Cannot find device "nvmf_br" 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:04:20.799 Cannot find device "nvmf_init_if" 01:04:20.799 06:03:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:04:20.799 Cannot find device "nvmf_init_if2" 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:04:20.799 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:04:20.799 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:04:20.799 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:04:21.059 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:04:21.059 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:04:21.059 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:04:21.059 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:04:21.059 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:04:21.059 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:04:21.059 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:04:21.059 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:04:21.059 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:04:21.059 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:04:21.059 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:04:21.059 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:04:21.059 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:04:21.059 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:04:21.059 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:04:21.059 06:03:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:04:21.059 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:04:21.059 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:04:21.059 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:04:21.059 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:04:21.059 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:04:21.059 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:04:21.059 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:04:21.059 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:04:21.059 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:04:21.059 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:04:21.059 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:04:21.318 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:04:21.318 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:04:21.318 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:04:21.318 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:04:21.318 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 01:04:21.318 01:04:21.318 --- 10.0.0.3 ping statistics --- 01:04:21.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:21.318 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 01:04:21.318 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:04:21.318 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:04:21.318 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.085 ms 01:04:21.318 01:04:21.318 --- 10.0.0.4 ping statistics --- 01:04:21.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:21.318 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 01:04:21.318 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:04:21.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:04:21.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 01:04:21.318 01:04:21.318 --- 10.0.0.1 ping statistics --- 01:04:21.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:21.318 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 01:04:21.318 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:04:21.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:04:21.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 01:04:21.318 01:04:21.318 --- 10.0.0.2 ping statistics --- 01:04:21.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:21.318 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 01:04:21.318 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:04:21.318 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 01:04:21.318 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:04:21.318 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:04:21.318 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:04:21.318 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:04:21.318 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:04:21.318 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:04:21.319 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:04:21.319 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 01:04:21.319 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:04:21.319 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 01:04:21.319 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:21.319 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67078 01:04:21.319 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 01:04:21.319 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67078 01:04:21.319 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67078 ']' 01:04:21.319 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:04:21.319 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 01:04:21.319 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
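With connectivity confirmed by the pings, nvmfappstart launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the daemon's RPC socket answers; only then does the script move on to loading keys and creating subsystems. A rough sketch of that wait pattern, assuming a simple poll against rpc_get_methods rather than the exact helper in autotest_common.sh:

    sock=/var/tmp/spdk.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do                          # roughly 10 s of retries
        "$rpc" -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done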
01:04:21.319 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 01:04:21.319 06:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67110 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=951634833e965a82f1ee86183f757c15904bed2946045de6 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.DGa 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 951634833e965a82f1ee86183f757c15904bed2946045de6 0 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 951634833e965a82f1ee86183f757c15904bed2946045de6 0 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=951634833e965a82f1ee86183f757c15904bed2946045de6 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 01:04:22.256 06:03:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.DGa 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.DGa 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.DGa 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cc3cfbf955b64c2932721eade633df7aab97ae5d776f992f29fb0258c3c24729 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.vPz 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cc3cfbf955b64c2932721eade633df7aab97ae5d776f992f29fb0258c3c24729 3 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cc3cfbf955b64c2932721eade633df7aab97ae5d776f992f29fb0258c3c24729 3 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cc3cfbf955b64c2932721eade633df7aab97ae5d776f992f29fb0258c3c24729 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 01:04:22.256 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.vPz 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.vPz 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.vPz 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 01:04:22.516 06:03:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9626a487e6c0df8bc0d03abcc9031621 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.5Rt 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9626a487e6c0df8bc0d03abcc9031621 1 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9626a487e6c0df8bc0d03abcc9031621 1 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9626a487e6c0df8bc0d03abcc9031621 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.5Rt 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.5Rt 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.5Rt 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=84bb02dc58698a193937fad0da8e3253c52a36383cde8b05 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Cqv 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 84bb02dc58698a193937fad0da8e3253c52a36383cde8b05 2 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 84bb02dc58698a193937fad0da8e3253c52a36383cde8b05 2 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 01:04:22.516 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 01:04:22.517 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=84bb02dc58698a193937fad0da8e3253c52a36383cde8b05 01:04:22.517 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 01:04:22.517 06:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Cqv 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Cqv 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Cqv 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fb42173f55cc7831ebd97240e973f991d896ca50816b0b29 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.wNf 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fb42173f55cc7831ebd97240e973f991d896ca50816b0b29 2 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fb42173f55cc7831ebd97240e973f991d896ca50816b0b29 2 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fb42173f55cc7831ebd97240e973f991d896ca50816b0b29 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.wNf 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.wNf 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.wNf 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 01:04:22.517 06:03:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 01:04:22.517 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=55ce970fb303b3c34592e79579ccede5 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Ds2 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 55ce970fb303b3c34592e79579ccede5 1 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 55ce970fb303b3c34592e79579ccede5 1 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=55ce970fb303b3c34592e79579ccede5 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Ds2 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Ds2 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Ds2 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5a8c82eaff4deb788b5c48bc98a7a5db17457023ff57fa20edfeea2e80b81f11 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.A3x 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
5a8c82eaff4deb788b5c48bc98a7a5db17457023ff57fa20edfeea2e80b81f11 3 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5a8c82eaff4deb788b5c48bc98a7a5db17457023ff57fa20edfeea2e80b81f11 3 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5a8c82eaff4deb788b5c48bc98a7a5db17457023ff57fa20edfeea2e80b81f11 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.A3x 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.A3x 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.A3x 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67078 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67078 ']' 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:04:22.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 01:04:22.777 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:23.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 01:04:23.036 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:04:23.036 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 01:04:23.036 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67110 /var/tmp/host.sock 01:04:23.036 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67110 ']' 01:04:23.036 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 01:04:23.036 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 01:04:23.036 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
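Each gen_dhchap_key call traced above draws the requested number of random bytes with xxd from /dev/urandom, wraps the hex string into a DH-HMAC-CHAP secret via a small inline python step (prefix DHHC-1 plus the digest index from the trace: 0=null, 1=sha256, 2=sha384, 3=sha512), writes it to a mktemp file and locks the permissions down to 0600. A stripped-down sketch of the visible shell steps for one 48-character null-digest key, with the python encoding itself elided as it is in the trace:

    key=$(xxd -p -c0 -l 24 /dev/urandom)        # 24 random bytes -> 48 hex chars
    file=$(mktemp -t spdk.key-null.XXX)
    # format_dhchap_key wraps "$key" with the DHHC-1 prefix and digest index here
    chmod 0600 "$file"
    echo "$file"                                 # path handed back to auth.sh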
01:04:23.036 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 01:04:23.036 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:23.296 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:04:23.296 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 01:04:23.296 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 01:04:23.296 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:23.296 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:23.296 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:23.296 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 01:04:23.296 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DGa 01:04:23.296 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:23.296 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:23.296 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:23.296 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.DGa 01:04:23.296 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.DGa 01:04:23.296 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.vPz ]] 01:04:23.296 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vPz 01:04:23.296 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:23.296 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:23.296 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:23.296 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vPz 01:04:23.296 06:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vPz 01:04:23.555 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 01:04:23.555 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.5Rt 01:04:23.555 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:23.555 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:23.555 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:23.555 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.5Rt 01:04:23.555 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.5Rt 01:04:23.814 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Cqv ]] 01:04:23.814 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Cqv 01:04:23.814 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:23.814 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:23.814 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:23.814 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Cqv 01:04:23.814 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Cqv 01:04:24.074 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 01:04:24.074 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.wNf 01:04:24.074 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:24.074 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:24.074 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:24.074 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.wNf 01:04:24.074 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.wNf 01:04:24.333 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Ds2 ]] 01:04:24.333 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ds2 01:04:24.333 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:24.333 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:24.333 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:24.333 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ds2 01:04:24.333 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ds2 01:04:24.333 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 01:04:24.333 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.A3x 01:04:24.333 06:03:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:24.333 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:24.333 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:24.333 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.A3x 01:04:24.333 06:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.A3x 01:04:24.593 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 01:04:24.593 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 01:04:24.593 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:04:24.593 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:04:24.593 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:04:24.593 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:04:24.853 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 01:04:24.853 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:04:24.853 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:04:24.853 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:04:24.853 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:04:24.853 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:04:24.853 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:04:24.853 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:24.853 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:24.853 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:24.853 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:04:24.853 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:04:24.853 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:04:25.112 01:04:25.112 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:04:25.112 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:04:25.112 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:04:25.372 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:04:25.372 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:04:25.372 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:25.372 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:25.372 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:25.372 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:04:25.372 { 01:04:25.372 "cntlid": 1, 01:04:25.372 "qid": 0, 01:04:25.372 "state": "enabled", 01:04:25.372 "thread": "nvmf_tgt_poll_group_000", 01:04:25.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:04:25.372 "listen_address": { 01:04:25.372 "trtype": "TCP", 01:04:25.372 "adrfam": "IPv4", 01:04:25.372 "traddr": "10.0.0.3", 01:04:25.372 "trsvcid": "4420" 01:04:25.372 }, 01:04:25.372 "peer_address": { 01:04:25.372 "trtype": "TCP", 01:04:25.372 "adrfam": "IPv4", 01:04:25.372 "traddr": "10.0.0.1", 01:04:25.372 "trsvcid": "33776" 01:04:25.372 }, 01:04:25.372 "auth": { 01:04:25.372 "state": "completed", 01:04:25.372 "digest": "sha256", 01:04:25.372 "dhgroup": "null" 01:04:25.372 } 01:04:25.372 } 01:04:25.372 ]' 01:04:25.372 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:04:25.372 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:04:25.372 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:04:25.372 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:04:25.372 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:04:25.372 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:04:25.372 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:04:25.372 06:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:04:25.632 06:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:04:25.632 06:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:04:28.934 06:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:04:28.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:04:28.934 06:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:04:28.934 06:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:28.934 06:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:28.934 06:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:28.934 06:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:04:28.934 06:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:04:28.934 06:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:04:29.192 06:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 01:04:29.192 06:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:04:29.192 06:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:04:29.192 06:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:04:29.192 06:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:04:29.192 06:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:04:29.192 06:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:04:29.192 06:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:29.192 06:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:29.192 06:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:29.192 06:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:04:29.192 06:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:04:29.192 06:03:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:04:29.451 01:04:29.451 06:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:04:29.451 06:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:04:29.451 06:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:04:29.710 06:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:04:29.710 06:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:04:29.710 06:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:29.710 06:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:29.710 06:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:29.710 06:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:04:29.710 { 01:04:29.710 "cntlid": 3, 01:04:29.710 "qid": 0, 01:04:29.710 "state": "enabled", 01:04:29.710 "thread": "nvmf_tgt_poll_group_000", 01:04:29.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:04:29.710 "listen_address": { 01:04:29.710 "trtype": "TCP", 01:04:29.710 "adrfam": "IPv4", 01:04:29.710 "traddr": "10.0.0.3", 01:04:29.710 "trsvcid": "4420" 01:04:29.710 }, 01:04:29.710 "peer_address": { 01:04:29.710 "trtype": "TCP", 01:04:29.710 "adrfam": "IPv4", 01:04:29.710 "traddr": "10.0.0.1", 01:04:29.710 "trsvcid": "33806" 01:04:29.710 }, 01:04:29.710 "auth": { 01:04:29.710 "state": "completed", 01:04:29.710 "digest": "sha256", 01:04:29.710 "dhgroup": "null" 01:04:29.710 } 01:04:29.710 } 01:04:29.710 ]' 01:04:29.710 06:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:04:29.710 06:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:04:29.710 06:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:04:29.710 06:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:04:29.710 06:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:04:29.710 06:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:04:29.710 06:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:04:29.710 06:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:04:29.968 06:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret 
DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:04:29.968 06:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:04:30.537 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:04:30.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:04:30.537 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:04:30.537 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:30.537 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:30.537 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:30.537 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:04:30.537 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:04:30.537 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:04:30.795 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 01:04:30.795 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:04:30.795 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:04:30.795 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:04:30.795 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:04:30.795 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:04:30.795 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:04:30.795 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:30.795 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:30.795 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:30.795 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:04:30.795 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:04:30.795 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:04:31.054 01:04:31.054 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:04:31.054 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:04:31.054 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:04:31.312 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:04:31.312 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:04:31.312 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:31.313 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:31.313 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:31.313 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:04:31.313 { 01:04:31.313 "cntlid": 5, 01:04:31.313 "qid": 0, 01:04:31.313 "state": "enabled", 01:04:31.313 "thread": "nvmf_tgt_poll_group_000", 01:04:31.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:04:31.313 "listen_address": { 01:04:31.313 "trtype": "TCP", 01:04:31.313 "adrfam": "IPv4", 01:04:31.313 "traddr": "10.0.0.3", 01:04:31.313 "trsvcid": "4420" 01:04:31.313 }, 01:04:31.313 "peer_address": { 01:04:31.313 "trtype": "TCP", 01:04:31.313 "adrfam": "IPv4", 01:04:31.313 "traddr": "10.0.0.1", 01:04:31.313 "trsvcid": "33826" 01:04:31.313 }, 01:04:31.313 "auth": { 01:04:31.313 "state": "completed", 01:04:31.313 "digest": "sha256", 01:04:31.313 "dhgroup": "null" 01:04:31.313 } 01:04:31.313 } 01:04:31.313 ]' 01:04:31.313 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:04:31.313 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:04:31.313 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:04:31.313 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:04:31.313 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:04:31.571 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:04:31.571 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:04:31.571 06:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:04:31.571 06:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:04:31.571 06:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:04:32.138 06:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:04:32.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:04:32.138 06:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:04:32.138 06:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:32.138 06:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:32.138 06:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:32.138 06:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:04:32.138 06:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:04:32.138 06:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:04:32.397 06:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 01:04:32.397 06:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:04:32.397 06:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:04:32.397 06:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:04:32.397 06:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:04:32.398 06:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:04:32.398 06:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key3 01:04:32.398 06:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:32.398 06:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:32.398 06:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:32.398 06:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:04:32.398 06:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:04:32.398 06:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:04:32.656 01:04:32.656 06:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:04:32.656 06:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:04:32.656 06:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:04:32.916 06:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:04:32.916 06:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:04:32.916 06:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:32.916 06:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:32.916 06:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:32.916 06:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:04:32.916 { 01:04:32.916 "cntlid": 7, 01:04:32.916 "qid": 0, 01:04:32.916 "state": "enabled", 01:04:32.916 "thread": "nvmf_tgt_poll_group_000", 01:04:32.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:04:32.916 "listen_address": { 01:04:32.916 "trtype": "TCP", 01:04:32.916 "adrfam": "IPv4", 01:04:32.916 "traddr": "10.0.0.3", 01:04:32.916 "trsvcid": "4420" 01:04:32.916 }, 01:04:32.916 "peer_address": { 01:04:32.916 "trtype": "TCP", 01:04:32.916 "adrfam": "IPv4", 01:04:32.916 "traddr": "10.0.0.1", 01:04:32.916 "trsvcid": "36260" 01:04:32.916 }, 01:04:32.916 "auth": { 01:04:32.916 "state": "completed", 01:04:32.916 "digest": "sha256", 01:04:32.916 "dhgroup": "null" 01:04:32.916 } 01:04:32.917 } 01:04:32.917 ]' 01:04:32.917 06:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:04:32.917 06:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:04:32.917 06:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:04:32.917 06:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:04:32.917 06:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:04:33.176 06:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:04:33.176 06:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:04:33.176 06:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:04:33.176 06:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:04:33.176 06:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:04:33.773 06:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:04:33.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:04:33.773 06:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:04:33.773 06:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:33.773 06:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:33.773 06:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:33.773 06:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:04:33.773 06:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:04:33.773 06:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:04:33.773 06:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:04:34.032 06:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 01:04:34.032 06:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:04:34.032 06:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:04:34.032 06:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:04:34.033 06:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:04:34.033 06:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:04:34.033 06:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:04:34.033 06:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:34.033 06:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:34.033 06:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:34.033 06:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:04:34.033 06:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:04:34.033 06:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:04:34.292 01:04:34.292 06:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:04:34.292 06:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:04:34.292 06:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:04:34.552 06:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:04:34.552 06:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:04:34.552 06:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:34.552 06:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:34.552 06:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:34.552 06:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:04:34.552 { 01:04:34.552 "cntlid": 9, 01:04:34.552 "qid": 0, 01:04:34.552 "state": "enabled", 01:04:34.552 "thread": "nvmf_tgt_poll_group_000", 01:04:34.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:04:34.552 "listen_address": { 01:04:34.552 "trtype": "TCP", 01:04:34.552 "adrfam": "IPv4", 01:04:34.552 "traddr": "10.0.0.3", 01:04:34.552 "trsvcid": "4420" 01:04:34.552 }, 01:04:34.552 "peer_address": { 01:04:34.552 "trtype": "TCP", 01:04:34.552 "adrfam": "IPv4", 01:04:34.552 "traddr": "10.0.0.1", 01:04:34.552 "trsvcid": "36282" 01:04:34.552 }, 01:04:34.552 "auth": { 01:04:34.552 "state": "completed", 01:04:34.552 "digest": "sha256", 01:04:34.552 "dhgroup": "ffdhe2048" 01:04:34.552 } 01:04:34.552 } 01:04:34.552 ]' 01:04:34.552 06:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:04:34.552 06:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:04:34.552 06:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:04:34.552 06:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:04:34.552 06:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:04:34.812 06:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:04:34.812 06:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:04:34.812 06:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:04:34.812 
06:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:04:34.812 06:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:04:35.380 06:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:04:35.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:04:35.380 06:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:04:35.380 06:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:35.380 06:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:35.640 06:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:35.640 06:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:04:35.640 06:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:04:35.640 06:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:04:35.640 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 01:04:35.640 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:04:35.640 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:04:35.640 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:04:35.640 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:04:35.640 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:04:35.640 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:04:35.640 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:35.640 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:35.640 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:35.640 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:04:35.640 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:04:35.640 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:04:35.899 01:04:35.899 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:04:35.899 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:04:35.899 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:04:36.158 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:04:36.158 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:04:36.158 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:36.158 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:36.158 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:36.158 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:04:36.158 { 01:04:36.158 "cntlid": 11, 01:04:36.158 "qid": 0, 01:04:36.158 "state": "enabled", 01:04:36.158 "thread": "nvmf_tgt_poll_group_000", 01:04:36.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:04:36.158 "listen_address": { 01:04:36.158 "trtype": "TCP", 01:04:36.158 "adrfam": "IPv4", 01:04:36.158 "traddr": "10.0.0.3", 01:04:36.158 "trsvcid": "4420" 01:04:36.158 }, 01:04:36.158 "peer_address": { 01:04:36.158 "trtype": "TCP", 01:04:36.158 "adrfam": "IPv4", 01:04:36.158 "traddr": "10.0.0.1", 01:04:36.158 "trsvcid": "36312" 01:04:36.158 }, 01:04:36.158 "auth": { 01:04:36.158 "state": "completed", 01:04:36.158 "digest": "sha256", 01:04:36.158 "dhgroup": "ffdhe2048" 01:04:36.158 } 01:04:36.158 } 01:04:36.158 ]' 01:04:36.158 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:04:36.158 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:04:36.158 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:04:36.418 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:04:36.418 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:04:36.418 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:04:36.418 06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:04:36.418 
06:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:04:36.677 06:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:04:36.677 06:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:04:37.246 06:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:04:37.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:04:37.246 06:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:04:37.246 06:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:37.246 06:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:37.246 06:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:37.246 06:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:04:37.246 06:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:04:37.246 06:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:04:37.246 06:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 01:04:37.246 06:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:04:37.246 06:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:04:37.246 06:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:04:37.246 06:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:04:37.246 06:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:04:37.246 06:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:04:37.246 06:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:37.246 06:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:37.246 06:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 01:04:37.246 06:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:04:37.246 06:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:04:37.246 06:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:04:37.506 01:04:37.506 06:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:04:37.506 06:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:04:37.506 06:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:04:37.765 06:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:04:37.765 06:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:04:37.765 06:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:37.765 06:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:37.765 06:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:37.765 06:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:04:37.765 { 01:04:37.765 "cntlid": 13, 01:04:37.765 "qid": 0, 01:04:37.765 "state": "enabled", 01:04:37.765 "thread": "nvmf_tgt_poll_group_000", 01:04:37.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:04:37.765 "listen_address": { 01:04:37.765 "trtype": "TCP", 01:04:37.765 "adrfam": "IPv4", 01:04:37.765 "traddr": "10.0.0.3", 01:04:37.765 "trsvcid": "4420" 01:04:37.765 }, 01:04:37.765 "peer_address": { 01:04:37.765 "trtype": "TCP", 01:04:37.765 "adrfam": "IPv4", 01:04:37.765 "traddr": "10.0.0.1", 01:04:37.765 "trsvcid": "36348" 01:04:37.765 }, 01:04:37.765 "auth": { 01:04:37.765 "state": "completed", 01:04:37.765 "digest": "sha256", 01:04:37.765 "dhgroup": "ffdhe2048" 01:04:37.765 } 01:04:37.765 } 01:04:37.765 ]' 01:04:37.765 06:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:04:38.024 06:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:04:38.024 06:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:04:38.024 06:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:04:38.024 06:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:04:38.024 06:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:04:38.024 06:03:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:04:38.024 06:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:04:38.283 06:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:04:38.283 06:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:04:38.850 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:04:38.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:04:38.850 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:04:38.850 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:38.850 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:38.850 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:38.850 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:04:38.850 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:04:38.850 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:04:38.850 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 01:04:38.850 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:04:38.850 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:04:38.850 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:04:38.850 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:04:38.850 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:04:38.850 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key3 01:04:38.850 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:38.850 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
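Each digest/dhgroup/key round in this trace follows the same wiring: the generated key files are registered via keyring_file_add_key on both RPC sockets (once, up front), bdev_nvme_set_options pins the DH-CHAP digest and dhgroup under test, the host NQN is allowed on the subsystem with the matching --dhchap-key, and a controller is attached through /var/tmp/host.sock, which is where authentication actually runs. A condensed sketch of one round follows, using the NQNs and the 10.0.0.3:4420 listener from this run; the helper name and the folding of the one-time keyring setup into the per-round function are illustrative.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8

connect_authenticate_sketch() {
    local digest=$1 dhgroup=$2 keyid=$3 keyfile=$4
    # Register the key on the target socket (default /var/tmp/spdk.sock) and on the host socket.
    "$rpc" keyring_file_add_key "key$keyid" "$keyfile"
    "$rpc" -s "$host_sock" keyring_file_add_key "key$keyid" "$keyfile"
    # Restrict the host-side initiator to one digest/dhgroup for this round.
    "$rpc" -s "$host_sock" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Target side: allow the host NQN with the matching DH-CHAP key.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"
    # Attach a controller via the host socket; DH-CHAP authentication happens during this connect.
    "$rpc" -s "$host_sock" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "key$keyid"
}

# e.g. the sha256/ffdhe2048 round with key1 seen above:
# connect_authenticate_sketch sha256 ffdhe2048 1 /tmp/spdk.key-sha256.5Rt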
01:04:38.850 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:38.850 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:04:38.850 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:04:38.850 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:04:39.108 01:04:39.367 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:04:39.367 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:04:39.367 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:04:39.367 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:04:39.367 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:04:39.367 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:39.367 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:39.367 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:39.367 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:04:39.367 { 01:04:39.367 "cntlid": 15, 01:04:39.367 "qid": 0, 01:04:39.367 "state": "enabled", 01:04:39.367 "thread": "nvmf_tgt_poll_group_000", 01:04:39.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:04:39.367 "listen_address": { 01:04:39.367 "trtype": "TCP", 01:04:39.367 "adrfam": "IPv4", 01:04:39.367 "traddr": "10.0.0.3", 01:04:39.367 "trsvcid": "4420" 01:04:39.367 }, 01:04:39.367 "peer_address": { 01:04:39.367 "trtype": "TCP", 01:04:39.367 "adrfam": "IPv4", 01:04:39.367 "traddr": "10.0.0.1", 01:04:39.367 "trsvcid": "36366" 01:04:39.367 }, 01:04:39.367 "auth": { 01:04:39.367 "state": "completed", 01:04:39.367 "digest": "sha256", 01:04:39.367 "dhgroup": "ffdhe2048" 01:04:39.367 } 01:04:39.367 } 01:04:39.367 ]' 01:04:39.367 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:04:39.367 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:04:39.367 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:04:39.625 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:04:39.625 06:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:04:39.625 06:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:04:39.625 
06:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:04:39.625 06:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:04:39.883 06:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:04:39.883 06:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:04:40.450 06:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:04:40.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:04:40.450 06:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:04:40.450 06:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:40.450 06:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:40.450 06:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:40.450 06:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:04:40.450 06:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:04:40.450 06:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:04:40.450 06:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:04:40.450 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 01:04:40.450 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:04:40.450 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:04:40.450 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:04:40.450 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:04:40.450 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:04:40.450 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:04:40.450 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:40.450 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 01:04:40.450 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:40.450 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:04:40.450 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:04:40.450 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:04:41.016 01:04:41.016 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:04:41.016 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:04:41.016 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:04:41.016 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:04:41.016 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:04:41.016 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:41.016 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:41.016 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:41.016 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:04:41.016 { 01:04:41.016 "cntlid": 17, 01:04:41.016 "qid": 0, 01:04:41.016 "state": "enabled", 01:04:41.016 "thread": "nvmf_tgt_poll_group_000", 01:04:41.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:04:41.016 "listen_address": { 01:04:41.016 "trtype": "TCP", 01:04:41.016 "adrfam": "IPv4", 01:04:41.016 "traddr": "10.0.0.3", 01:04:41.016 "trsvcid": "4420" 01:04:41.016 }, 01:04:41.016 "peer_address": { 01:04:41.016 "trtype": "TCP", 01:04:41.016 "adrfam": "IPv4", 01:04:41.016 "traddr": "10.0.0.1", 01:04:41.016 "trsvcid": "36406" 01:04:41.016 }, 01:04:41.016 "auth": { 01:04:41.016 "state": "completed", 01:04:41.016 "digest": "sha256", 01:04:41.016 "dhgroup": "ffdhe3072" 01:04:41.016 } 01:04:41.016 } 01:04:41.016 ]' 01:04:41.016 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:04:41.017 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:04:41.017 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:04:41.275 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:04:41.275 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:04:41.275 06:03:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:04:41.275 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:04:41.275 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:04:41.534 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:04:41.534 06:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:04:42.100 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:04:42.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:04:42.100 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:04:42.100 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:42.100 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:42.100 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:42.100 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:04:42.100 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:04:42.100 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:04:42.100 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 01:04:42.100 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:04:42.100 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:04:42.100 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:04:42.100 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:04:42.100 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:04:42.100 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 01:04:42.100 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:42.100 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:42.100 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:42.100 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:04:42.100 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:04:42.100 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:04:42.358 01:04:42.621 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:04:42.621 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:04:42.621 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:04:42.621 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:04:42.621 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:04:42.621 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:42.621 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:42.621 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:42.621 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:04:42.621 { 01:04:42.621 "cntlid": 19, 01:04:42.621 "qid": 0, 01:04:42.621 "state": "enabled", 01:04:42.621 "thread": "nvmf_tgt_poll_group_000", 01:04:42.621 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:04:42.621 "listen_address": { 01:04:42.621 "trtype": "TCP", 01:04:42.621 "adrfam": "IPv4", 01:04:42.621 "traddr": "10.0.0.3", 01:04:42.621 "trsvcid": "4420" 01:04:42.621 }, 01:04:42.621 "peer_address": { 01:04:42.621 "trtype": "TCP", 01:04:42.621 "adrfam": "IPv4", 01:04:42.621 "traddr": "10.0.0.1", 01:04:42.621 "trsvcid": "33092" 01:04:42.621 }, 01:04:42.621 "auth": { 01:04:42.621 "state": "completed", 01:04:42.621 "digest": "sha256", 01:04:42.621 "dhgroup": "ffdhe3072" 01:04:42.621 } 01:04:42.621 } 01:04:42.621 ]' 01:04:42.621 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:04:42.621 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:04:42.621 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:04:42.880 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:04:42.880 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:04:42.880 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:04:42.880 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:04:42.880 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:04:43.139 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:04:43.139 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:04:43.707 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:04:43.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:04:43.707 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:04:43.707 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:43.707 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:43.708 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:43.708 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:04:43.708 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:04:43.708 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:04:43.708 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 01:04:43.708 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:04:43.708 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:04:43.708 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:04:43.708 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:04:43.708 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:04:43.708 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:04:43.708 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:43.708 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:43.708 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:43.708 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:04:43.708 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:04:43.708 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:04:43.967 01:04:43.967 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:04:43.967 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:04:43.967 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:04:44.226 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:04:44.226 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:04:44.226 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:44.226 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:44.226 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:44.226 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:04:44.226 { 01:04:44.226 "cntlid": 21, 01:04:44.226 "qid": 0, 01:04:44.226 "state": "enabled", 01:04:44.226 "thread": "nvmf_tgt_poll_group_000", 01:04:44.226 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:04:44.226 "listen_address": { 01:04:44.226 "trtype": "TCP", 01:04:44.226 "adrfam": "IPv4", 01:04:44.226 "traddr": "10.0.0.3", 01:04:44.226 "trsvcid": "4420" 01:04:44.226 }, 01:04:44.226 "peer_address": { 01:04:44.226 "trtype": "TCP", 01:04:44.226 "adrfam": "IPv4", 01:04:44.226 "traddr": "10.0.0.1", 01:04:44.226 "trsvcid": "33120" 01:04:44.226 }, 01:04:44.226 "auth": { 01:04:44.226 "state": "completed", 01:04:44.226 "digest": "sha256", 01:04:44.226 "dhgroup": "ffdhe3072" 01:04:44.226 } 01:04:44.226 } 01:04:44.226 ]' 01:04:44.226 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:04:44.485 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:04:44.485 06:03:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:04:44.485 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:04:44.485 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:04:44.485 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:04:44.485 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:04:44.485 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:04:44.745 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:04:44.745 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:04:45.313 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:04:45.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:04:45.313 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:04:45.313 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:45.313 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:45.313 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:45.313 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:04:45.313 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:04:45.313 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:04:45.313 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 01:04:45.313 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:04:45.313 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:04:45.313 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:04:45.313 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:04:45.313 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:04:45.314 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key3 01:04:45.314 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:45.314 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:45.314 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:45.314 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:04:45.314 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:04:45.314 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:04:45.881 01:04:45.881 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:04:45.881 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:04:45.881 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:04:45.882 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:04:45.882 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:04:45.882 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:45.882 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:45.882 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:45.882 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:04:45.882 { 01:04:45.882 "cntlid": 23, 01:04:45.882 "qid": 0, 01:04:45.882 "state": "enabled", 01:04:45.882 "thread": "nvmf_tgt_poll_group_000", 01:04:45.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:04:45.882 "listen_address": { 01:04:45.882 "trtype": "TCP", 01:04:45.882 "adrfam": "IPv4", 01:04:45.882 "traddr": "10.0.0.3", 01:04:45.882 "trsvcid": "4420" 01:04:45.882 }, 01:04:45.882 "peer_address": { 01:04:45.882 "trtype": "TCP", 01:04:45.882 "adrfam": "IPv4", 01:04:45.882 "traddr": "10.0.0.1", 01:04:45.882 "trsvcid": "33142" 01:04:45.882 }, 01:04:45.882 "auth": { 01:04:45.882 "state": "completed", 01:04:45.882 "digest": "sha256", 01:04:45.882 "dhgroup": "ffdhe3072" 01:04:45.882 } 01:04:45.882 } 01:04:45.882 ]' 01:04:45.882 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:04:45.882 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 01:04:45.882 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:04:46.141 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:04:46.141 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:04:46.141 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:04:46.141 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:04:46.141 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:04:46.141 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:04:46.141 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:04:46.710 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:04:46.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:04:46.710 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:04:46.710 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:46.710 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:46.710 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:46.710 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:04:46.710 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:04:46.710 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:04:46.710 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:04:46.970 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 01:04:46.970 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:04:46.970 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:04:46.970 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:04:46.970 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:04:46.970 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:04:46.970 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:04:46.970 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:46.970 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:46.970 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:46.970 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:04:46.970 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:04:46.970 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:04:47.230 01:04:47.230 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:04:47.230 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:04:47.230 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:04:47.489 06:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:04:47.489 06:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:04:47.489 06:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:47.489 06:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:47.489 06:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:47.489 06:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:04:47.489 { 01:04:47.489 "cntlid": 25, 01:04:47.489 "qid": 0, 01:04:47.489 "state": "enabled", 01:04:47.489 "thread": "nvmf_tgt_poll_group_000", 01:04:47.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:04:47.489 "listen_address": { 01:04:47.489 "trtype": "TCP", 01:04:47.489 "adrfam": "IPv4", 01:04:47.489 "traddr": "10.0.0.3", 01:04:47.489 "trsvcid": "4420" 01:04:47.489 }, 01:04:47.489 "peer_address": { 01:04:47.489 "trtype": "TCP", 01:04:47.489 "adrfam": "IPv4", 01:04:47.489 "traddr": "10.0.0.1", 01:04:47.489 "trsvcid": "33174" 01:04:47.489 }, 01:04:47.489 "auth": { 01:04:47.489 "state": "completed", 01:04:47.489 "digest": "sha256", 01:04:47.489 "dhgroup": "ffdhe4096" 01:04:47.489 } 01:04:47.489 } 01:04:47.489 ]' 01:04:47.489 06:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 01:04:47.489 06:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:04:47.489 06:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:04:47.748 06:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:04:47.748 06:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:04:47.749 06:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:04:47.749 06:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:04:47.749 06:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:04:48.007 06:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:04:48.007 06:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:04:48.574 06:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:04:48.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:04:48.574 06:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:04:48.574 06:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:48.574 06:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:48.574 06:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:48.574 06:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:04:48.574 06:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:04:48.574 06:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:04:48.574 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 01:04:48.574 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:04:48.574 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:04:48.574 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 01:04:48.574 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:04:48.574 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:04:48.574 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:04:48.574 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:48.574 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:48.574 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:48.574 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:04:48.574 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:04:48.574 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:04:48.834 01:04:49.092 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:04:49.092 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:04:49.092 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:04:49.092 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:04:49.092 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:04:49.092 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:49.092 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:49.092 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:49.092 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:04:49.092 { 01:04:49.092 "cntlid": 27, 01:04:49.092 "qid": 0, 01:04:49.092 "state": "enabled", 01:04:49.092 "thread": "nvmf_tgt_poll_group_000", 01:04:49.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:04:49.092 "listen_address": { 01:04:49.092 "trtype": "TCP", 01:04:49.092 "adrfam": "IPv4", 01:04:49.092 "traddr": "10.0.0.3", 01:04:49.092 "trsvcid": "4420" 01:04:49.092 }, 01:04:49.092 "peer_address": { 01:04:49.092 "trtype": "TCP", 01:04:49.092 "adrfam": "IPv4", 01:04:49.092 "traddr": "10.0.0.1", 01:04:49.092 "trsvcid": "33210" 01:04:49.092 }, 01:04:49.092 "auth": { 01:04:49.092 "state": "completed", 
01:04:49.092 "digest": "sha256", 01:04:49.092 "dhgroup": "ffdhe4096" 01:04:49.092 } 01:04:49.092 } 01:04:49.092 ]' 01:04:49.092 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:04:49.351 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:04:49.351 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:04:49.351 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:04:49.351 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:04:49.351 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:04:49.351 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:04:49.351 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:04:49.611 06:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:04:49.611 06:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:04:50.192 06:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:04:50.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:04:50.192 06:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:04:50.192 06:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:50.192 06:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:50.192 06:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:50.192 06:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:04:50.192 06:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:04:50.192 06:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:04:50.449 06:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 01:04:50.449 06:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:04:50.449 06:03:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:04:50.449 06:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:04:50.449 06:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:04:50.449 06:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:04:50.449 06:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:04:50.449 06:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:50.449 06:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:50.449 06:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:50.449 06:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:04:50.449 06:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:04:50.449 06:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:04:50.710 01:04:50.710 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:04:50.710 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:04:50.710 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:04:50.984 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:04:50.984 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:04:50.984 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:50.984 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:50.984 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:50.984 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:04:50.984 { 01:04:50.984 "cntlid": 29, 01:04:50.984 "qid": 0, 01:04:50.984 "state": "enabled", 01:04:50.984 "thread": "nvmf_tgt_poll_group_000", 01:04:50.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:04:50.984 "listen_address": { 01:04:50.984 "trtype": "TCP", 01:04:50.984 "adrfam": "IPv4", 01:04:50.984 "traddr": "10.0.0.3", 01:04:50.984 "trsvcid": "4420" 01:04:50.984 }, 01:04:50.984 "peer_address": { 01:04:50.984 "trtype": "TCP", 01:04:50.984 "adrfam": 
"IPv4", 01:04:50.984 "traddr": "10.0.0.1", 01:04:50.984 "trsvcid": "33228" 01:04:50.984 }, 01:04:50.984 "auth": { 01:04:50.984 "state": "completed", 01:04:50.984 "digest": "sha256", 01:04:50.984 "dhgroup": "ffdhe4096" 01:04:50.984 } 01:04:50.984 } 01:04:50.984 ]' 01:04:50.984 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:04:50.984 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:04:50.984 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:04:50.984 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:04:50.984 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:04:50.984 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:04:50.984 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:04:50.984 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:04:51.242 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:04:51.242 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:04:51.810 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:04:51.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:04:51.810 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:04:51.810 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:51.810 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:51.810 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:51.810 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:04:51.810 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:04:51.810 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:04:52.069 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 01:04:52.069 06:03:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:04:52.069 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:04:52.069 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:04:52.069 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:04:52.069 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:04:52.069 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key3 01:04:52.069 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:52.069 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:52.069 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:52.069 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:04:52.069 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:04:52.069 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:04:52.327 01:04:52.327 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:04:52.327 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:04:52.327 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:04:52.586 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:04:52.586 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:04:52.586 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:52.586 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:52.586 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:52.586 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:04:52.586 { 01:04:52.586 "cntlid": 31, 01:04:52.586 "qid": 0, 01:04:52.586 "state": "enabled", 01:04:52.586 "thread": "nvmf_tgt_poll_group_000", 01:04:52.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:04:52.586 "listen_address": { 01:04:52.586 "trtype": "TCP", 01:04:52.586 "adrfam": "IPv4", 01:04:52.586 "traddr": "10.0.0.3", 01:04:52.586 "trsvcid": "4420" 01:04:52.586 }, 01:04:52.586 "peer_address": { 01:04:52.586 "trtype": "TCP", 
01:04:52.586 "adrfam": "IPv4", 01:04:52.586 "traddr": "10.0.0.1", 01:04:52.586 "trsvcid": "55140" 01:04:52.586 }, 01:04:52.586 "auth": { 01:04:52.586 "state": "completed", 01:04:52.586 "digest": "sha256", 01:04:52.586 "dhgroup": "ffdhe4096" 01:04:52.586 } 01:04:52.586 } 01:04:52.586 ]' 01:04:52.586 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:04:52.586 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:04:52.586 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:04:52.586 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:04:52.586 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:04:52.586 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:04:52.586 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:04:52.586 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:04:52.845 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:04:52.845 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:04:53.414 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:04:53.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:04:53.414 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:04:53.414 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:53.414 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:53.414 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:53.414 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:04:53.414 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:04:53.414 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:04:53.414 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:04:53.674 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 01:04:53.674 
06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:04:53.674 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:04:53.674 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:04:53.674 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:04:53.674 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:04:53.674 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:04:53.674 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:53.674 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:53.674 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:53.674 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:04:53.674 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:04:53.674 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:04:53.934 01:04:53.934 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:04:53.934 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:04:53.934 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:04:54.193 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:04:54.193 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:04:54.193 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:54.193 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:54.193 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:54.193 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:04:54.193 { 01:04:54.193 "cntlid": 33, 01:04:54.193 "qid": 0, 01:04:54.193 "state": "enabled", 01:04:54.193 "thread": "nvmf_tgt_poll_group_000", 01:04:54.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:04:54.193 "listen_address": { 01:04:54.193 "trtype": "TCP", 01:04:54.193 "adrfam": "IPv4", 01:04:54.193 "traddr": 
"10.0.0.3", 01:04:54.193 "trsvcid": "4420" 01:04:54.193 }, 01:04:54.193 "peer_address": { 01:04:54.193 "trtype": "TCP", 01:04:54.193 "adrfam": "IPv4", 01:04:54.193 "traddr": "10.0.0.1", 01:04:54.193 "trsvcid": "55166" 01:04:54.193 }, 01:04:54.193 "auth": { 01:04:54.193 "state": "completed", 01:04:54.193 "digest": "sha256", 01:04:54.193 "dhgroup": "ffdhe6144" 01:04:54.193 } 01:04:54.193 } 01:04:54.193 ]' 01:04:54.193 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:04:54.193 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:04:54.193 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:04:54.453 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:04:54.453 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:04:54.453 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:04:54.453 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:04:54.453 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:04:54.453 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:04:54.453 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:04:55.022 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:04:55.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:04:55.022 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:04:55.022 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:55.022 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:55.022 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:55.022 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:04:55.022 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:04:55.022 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:04:55.281 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 01:04:55.281 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:04:55.281 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:04:55.282 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:04:55.282 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:04:55.282 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:04:55.282 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:04:55.282 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:55.282 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:55.282 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:55.282 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:04:55.282 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:04:55.282 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:04:55.850 01:04:55.850 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:04:55.850 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:04:55.850 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:04:55.850 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:04:55.850 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:04:55.850 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:55.850 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:55.850 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:55.850 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:04:55.850 { 01:04:55.850 "cntlid": 35, 01:04:55.850 "qid": 0, 01:04:55.850 "state": "enabled", 01:04:55.850 "thread": "nvmf_tgt_poll_group_000", 
01:04:55.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:04:55.850 "listen_address": { 01:04:55.850 "trtype": "TCP", 01:04:55.850 "adrfam": "IPv4", 01:04:55.850 "traddr": "10.0.0.3", 01:04:55.850 "trsvcid": "4420" 01:04:55.850 }, 01:04:55.850 "peer_address": { 01:04:55.850 "trtype": "TCP", 01:04:55.850 "adrfam": "IPv4", 01:04:55.850 "traddr": "10.0.0.1", 01:04:55.850 "trsvcid": "55202" 01:04:55.850 }, 01:04:55.850 "auth": { 01:04:55.850 "state": "completed", 01:04:55.850 "digest": "sha256", 01:04:55.850 "dhgroup": "ffdhe6144" 01:04:55.850 } 01:04:55.850 } 01:04:55.850 ]' 01:04:55.850 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:04:56.110 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:04:56.110 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:04:56.110 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:04:56.110 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:04:56.110 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:04:56.110 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:04:56.110 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:04:56.370 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:04:56.370 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:04:56.939 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:04:56.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:04:56.939 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:04:56.940 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:56.940 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:56.940 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:56.940 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:04:56.940 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:04:56.940 06:03:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:04:56.940 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 01:04:56.940 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:04:56.940 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:04:56.940 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:04:56.940 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:04:56.940 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:04:56.940 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:04:56.940 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:56.940 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:56.940 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:56.940 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:04:56.940 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:04:56.940 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:04:57.560 01:04:57.560 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:04:57.560 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:04:57.560 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:04:57.560 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:04:57.560 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:04:57.560 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:57.560 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:57.560 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:57.560 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:04:57.560 { 
01:04:57.560 "cntlid": 37, 01:04:57.560 "qid": 0, 01:04:57.560 "state": "enabled", 01:04:57.560 "thread": "nvmf_tgt_poll_group_000", 01:04:57.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:04:57.560 "listen_address": { 01:04:57.560 "trtype": "TCP", 01:04:57.560 "adrfam": "IPv4", 01:04:57.560 "traddr": "10.0.0.3", 01:04:57.560 "trsvcid": "4420" 01:04:57.560 }, 01:04:57.560 "peer_address": { 01:04:57.560 "trtype": "TCP", 01:04:57.560 "adrfam": "IPv4", 01:04:57.560 "traddr": "10.0.0.1", 01:04:57.560 "trsvcid": "55234" 01:04:57.560 }, 01:04:57.560 "auth": { 01:04:57.560 "state": "completed", 01:04:57.560 "digest": "sha256", 01:04:57.560 "dhgroup": "ffdhe6144" 01:04:57.560 } 01:04:57.560 } 01:04:57.560 ]' 01:04:57.560 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:04:57.820 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:04:57.820 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:04:57.820 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:04:57.820 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:04:57.820 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:04:57.820 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:04:57.820 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:04:58.079 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:04:58.079 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:04:58.648 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:04:58.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:04:58.648 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:04:58.648 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:58.648 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:58.648 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:58.648 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:04:58.648 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:04:58.648 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:04:58.648 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 01:04:58.648 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:04:58.648 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:04:58.648 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:04:58.648 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:04:58.648 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:04:58.648 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key3 01:04:58.648 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:58.648 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:58.907 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:58.907 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:04:58.907 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:04:58.907 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:04:59.167 01:04:59.167 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:04:59.167 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:04:59.167 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:04:59.427 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:04:59.427 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:04:59.427 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:59.427 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:04:59.427 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:59.427 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
01:04:59.427 { 01:04:59.427 "cntlid": 39, 01:04:59.427 "qid": 0, 01:04:59.427 "state": "enabled", 01:04:59.427 "thread": "nvmf_tgt_poll_group_000", 01:04:59.427 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:04:59.427 "listen_address": { 01:04:59.427 "trtype": "TCP", 01:04:59.427 "adrfam": "IPv4", 01:04:59.427 "traddr": "10.0.0.3", 01:04:59.427 "trsvcid": "4420" 01:04:59.427 }, 01:04:59.427 "peer_address": { 01:04:59.427 "trtype": "TCP", 01:04:59.427 "adrfam": "IPv4", 01:04:59.427 "traddr": "10.0.0.1", 01:04:59.427 "trsvcid": "55270" 01:04:59.427 }, 01:04:59.427 "auth": { 01:04:59.427 "state": "completed", 01:04:59.427 "digest": "sha256", 01:04:59.427 "dhgroup": "ffdhe6144" 01:04:59.427 } 01:04:59.427 } 01:04:59.427 ]' 01:04:59.427 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:04:59.427 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:04:59.427 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:04:59.427 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:04:59.427 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:04:59.427 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:04:59.427 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:04:59.427 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:04:59.687 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:04:59.687 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:05:00.255 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:00.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:00.255 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:00.255 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:00.255 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:00.255 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:00.255 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:05:00.255 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:00.255 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:05:00.255 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:05:00.514 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 01:05:00.514 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:00.514 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:05:00.514 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:05:00.514 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:05:00.514 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:00.514 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:00.514 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:00.514 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:00.514 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:00.514 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:00.514 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:00.514 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:01.081 01:05:01.081 06:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:01.081 06:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:01.081 06:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:01.340 06:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:01.340 06:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:01.340 06:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:01.340 06:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:01.340 06:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 01:05:01.340 06:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:01.340 { 01:05:01.340 "cntlid": 41, 01:05:01.340 "qid": 0, 01:05:01.340 "state": "enabled", 01:05:01.340 "thread": "nvmf_tgt_poll_group_000", 01:05:01.340 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:01.340 "listen_address": { 01:05:01.340 "trtype": "TCP", 01:05:01.340 "adrfam": "IPv4", 01:05:01.340 "traddr": "10.0.0.3", 01:05:01.340 "trsvcid": "4420" 01:05:01.340 }, 01:05:01.340 "peer_address": { 01:05:01.340 "trtype": "TCP", 01:05:01.340 "adrfam": "IPv4", 01:05:01.340 "traddr": "10.0.0.1", 01:05:01.340 "trsvcid": "55292" 01:05:01.340 }, 01:05:01.340 "auth": { 01:05:01.340 "state": "completed", 01:05:01.340 "digest": "sha256", 01:05:01.340 "dhgroup": "ffdhe8192" 01:05:01.340 } 01:05:01.340 } 01:05:01.340 ]' 01:05:01.340 06:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:01.340 06:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:05:01.340 06:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:01.340 06:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:05:01.340 06:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:01.340 06:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:01.340 06:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:01.340 06:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:01.599 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:05:01.599 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:05:02.166 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:02.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:02.166 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:02.166 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:02.166 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:02.166 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
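The trace above completes one full connect_authenticate pass for sha256 with ffdhe8192 and key0. A minimal sketch of that command sequence, reconstructed from the traced calls (rpc_cmd, hostrpc, bdev_connect and nvme_connect are the auth.sh wrappers seen in this log; $hostnqn, $keyid, $key and $ckey are stand-ins for the values printed above):

rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key "key$keyid" ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
bdev_connect -b nvme0 --dhchap-key "key$keyid" ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'    # expected: nvme0
hostrpc bdev_nvme_detach_controller nvme0
nvme_connect --dhchap-secret "$key" ${ckey:+--dhchap-ctrl-secret "$ckey"}
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"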
01:05:02.166 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:02.166 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:05:02.166 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:05:02.424 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 01:05:02.424 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:02.424 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:05:02.424 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:05:02.424 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:05:02.424 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:02.424 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:02.424 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:02.424 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:02.424 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:02.424 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:02.424 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:02.424 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:02.992 01:05:02.992 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:02.992 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:02.992 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:02.992 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:02.992 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:02.992 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:02.992 06:03:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:03.372 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:03.372 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:03.372 { 01:05:03.372 "cntlid": 43, 01:05:03.372 "qid": 0, 01:05:03.372 "state": "enabled", 01:05:03.372 "thread": "nvmf_tgt_poll_group_000", 01:05:03.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:03.372 "listen_address": { 01:05:03.372 "trtype": "TCP", 01:05:03.372 "adrfam": "IPv4", 01:05:03.372 "traddr": "10.0.0.3", 01:05:03.372 "trsvcid": "4420" 01:05:03.372 }, 01:05:03.372 "peer_address": { 01:05:03.372 "trtype": "TCP", 01:05:03.372 "adrfam": "IPv4", 01:05:03.372 "traddr": "10.0.0.1", 01:05:03.372 "trsvcid": "47590" 01:05:03.372 }, 01:05:03.372 "auth": { 01:05:03.372 "state": "completed", 01:05:03.372 "digest": "sha256", 01:05:03.372 "dhgroup": "ffdhe8192" 01:05:03.372 } 01:05:03.372 } 01:05:03.372 ]' 01:05:03.372 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:03.372 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:05:03.372 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:03.372 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:05:03.372 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:03.372 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:03.372 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:03.372 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:03.372 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:05:03.372 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:05:03.939 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:03.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:03.939 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:03.939 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:03.939 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
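Between attach and detach, each pass also validates the authenticated queue pair on the target side. A sketch of that check, based on the nvmf_subsystem_get_qpairs JSON shape and the jq expressions traced above ($digest and $dhgroup are the values of the current iteration):

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]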
01:05:03.939 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:03.939 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:03.939 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:05:03.939 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:05:04.198 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 01:05:04.198 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:04.198 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:05:04.198 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:05:04.198 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:05:04.198 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:04.198 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:04.198 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:04.198 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:04.198 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:04.198 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:04.198 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:04.198 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:04.766 01:05:04.766 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:04.766 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:04.766 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:05.025 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:05.025 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:05.025 06:03:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:05.025 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:05.025 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:05.025 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:05.025 { 01:05:05.025 "cntlid": 45, 01:05:05.025 "qid": 0, 01:05:05.025 "state": "enabled", 01:05:05.025 "thread": "nvmf_tgt_poll_group_000", 01:05:05.026 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:05.026 "listen_address": { 01:05:05.026 "trtype": "TCP", 01:05:05.026 "adrfam": "IPv4", 01:05:05.026 "traddr": "10.0.0.3", 01:05:05.026 "trsvcid": "4420" 01:05:05.026 }, 01:05:05.026 "peer_address": { 01:05:05.026 "trtype": "TCP", 01:05:05.026 "adrfam": "IPv4", 01:05:05.026 "traddr": "10.0.0.1", 01:05:05.026 "trsvcid": "47624" 01:05:05.026 }, 01:05:05.026 "auth": { 01:05:05.026 "state": "completed", 01:05:05.026 "digest": "sha256", 01:05:05.026 "dhgroup": "ffdhe8192" 01:05:05.026 } 01:05:05.026 } 01:05:05.026 ]' 01:05:05.026 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:05.026 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:05:05.026 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:05.026 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:05:05.026 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:05.026 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:05.026 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:05.026 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:05.285 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:05:05.285 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:05:05.853 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:05.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:05.853 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:05.853 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
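The nvme-cli step repeats the handshake with the same key material in DHHC-1 wire format; --dhchap-ctrl-secret is passed only for iterations that carry a controller key (the key3 connects in this log omit it, so those are unidirectional). A sketch of the call with the long base64 secrets elided, mirroring the flags traced above:

nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret "DHHC-1:02:<host key, elided>:" \
    --dhchap-ctrl-secret "DHHC-1:01:<ctrlr key, elided>:"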
01:05:05.853 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:05.853 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:05.853 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:05.853 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:05:05.853 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:05:06.112 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 01:05:06.112 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:06.112 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:05:06.112 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:05:06.112 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:05:06.112 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:06.112 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key3 01:05:06.112 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:06.112 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:06.112 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:06.112 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:05:06.112 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:05:06.112 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:05:06.682 01:05:06.682 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:06.682 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:06.682 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:06.682 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:06.682 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:06.682 
06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:06.682 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:06.942 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:06.942 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:06.942 { 01:05:06.942 "cntlid": 47, 01:05:06.942 "qid": 0, 01:05:06.942 "state": "enabled", 01:05:06.942 "thread": "nvmf_tgt_poll_group_000", 01:05:06.942 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:06.942 "listen_address": { 01:05:06.942 "trtype": "TCP", 01:05:06.942 "adrfam": "IPv4", 01:05:06.942 "traddr": "10.0.0.3", 01:05:06.942 "trsvcid": "4420" 01:05:06.942 }, 01:05:06.942 "peer_address": { 01:05:06.942 "trtype": "TCP", 01:05:06.942 "adrfam": "IPv4", 01:05:06.942 "traddr": "10.0.0.1", 01:05:06.942 "trsvcid": "47646" 01:05:06.942 }, 01:05:06.942 "auth": { 01:05:06.942 "state": "completed", 01:05:06.942 "digest": "sha256", 01:05:06.942 "dhgroup": "ffdhe8192" 01:05:06.942 } 01:05:06.942 } 01:05:06.942 ]' 01:05:06.942 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:06.942 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:05:06.942 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:06.942 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:05:06.942 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:06.942 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:06.942 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:06.942 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:07.201 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:05:07.201 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:05:07.770 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:07.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:07.770 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:07.770 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:07.770 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
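With key3 done the sha256 block is complete, and the next entries switch to the sha384 digest with the null DH group. The nesting that drives this, reconstructed from the for-loop markers traced out of target/auth.sh (the exact contents of the digests, dhgroups and keys arrays are assumptions based only on the values exercised in this log):

for digest in "${digests[@]}"; do          # sha256, sha384, ...
  for dhgroup in "${dhgroups[@]}"; do      # null, ..., ffdhe4096, ffdhe6144, ffdhe8192
    for keyid in "${!keys[@]}"; do         # 0 1 2 3
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done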
01:05:07.770 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:07.770 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 01:05:07.770 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:05:07.770 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:07.770 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:05:07.770 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:05:07.770 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 01:05:07.770 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:07.770 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:05:07.770 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:05:07.770 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:05:07.770 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:07.770 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:07.770 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:07.770 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:08.029 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:08.029 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:08.029 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:08.029 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:08.029 01:05:08.288 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:08.288 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:08.288 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:08.288 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:08.288 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:08.288 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:08.288 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:08.288 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:08.288 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:08.288 { 01:05:08.288 "cntlid": 49, 01:05:08.288 "qid": 0, 01:05:08.288 "state": "enabled", 01:05:08.288 "thread": "nvmf_tgt_poll_group_000", 01:05:08.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:08.288 "listen_address": { 01:05:08.288 "trtype": "TCP", 01:05:08.288 "adrfam": "IPv4", 01:05:08.288 "traddr": "10.0.0.3", 01:05:08.288 "trsvcid": "4420" 01:05:08.288 }, 01:05:08.288 "peer_address": { 01:05:08.288 "trtype": "TCP", 01:05:08.288 "adrfam": "IPv4", 01:05:08.288 "traddr": "10.0.0.1", 01:05:08.288 "trsvcid": "47692" 01:05:08.288 }, 01:05:08.288 "auth": { 01:05:08.288 "state": "completed", 01:05:08.288 "digest": "sha384", 01:05:08.288 "dhgroup": "null" 01:05:08.288 } 01:05:08.288 } 01:05:08.288 ]' 01:05:08.288 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:08.548 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:05:08.548 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:08.548 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:05:08.548 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:08.548 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:08.548 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:08.548 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:08.808 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:05:08.808 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:05:09.378 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:09.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:09.378 06:04:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:09.378 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:09.378 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:09.378 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:09.378 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:09.378 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:05:09.378 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:05:09.378 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 01:05:09.378 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:09.378 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:05:09.378 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:05:09.378 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:05:09.378 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:09.378 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:09.378 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:09.378 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:09.378 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:09.378 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:09.378 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:09.378 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:09.637 01:05:09.637 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:09.637 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
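The verification that each pass then runs condenses to two RPC queries and a handful of jq probes, as in the sketch below. This is a reconstruction, not the literal body of target/auth.sh: rpc_cmd in the test is assumed to reach the nvmf target application's default RPC socket, while hostrpc adds -s /var/tmp/host.sock as seen throughout the trace.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # the controller created by bdev_nvme_attach_controller must have come up
  name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]

  # the target-side qpair must report the negotiated parameters and a
  # completed DH-HMAC-CHAP exchange (sha384/null in this stretch of the log)
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # afterwards the controller is detached again before the kernel connect test
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
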
01:05:09.637 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:09.897 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:09.897 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:09.897 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:09.897 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:09.897 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:09.897 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:09.897 { 01:05:09.897 "cntlid": 51, 01:05:09.897 "qid": 0, 01:05:09.897 "state": "enabled", 01:05:09.897 "thread": "nvmf_tgt_poll_group_000", 01:05:09.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:09.897 "listen_address": { 01:05:09.897 "trtype": "TCP", 01:05:09.897 "adrfam": "IPv4", 01:05:09.897 "traddr": "10.0.0.3", 01:05:09.897 "trsvcid": "4420" 01:05:09.897 }, 01:05:09.897 "peer_address": { 01:05:09.897 "trtype": "TCP", 01:05:09.897 "adrfam": "IPv4", 01:05:09.897 "traddr": "10.0.0.1", 01:05:09.897 "trsvcid": "47718" 01:05:09.897 }, 01:05:09.897 "auth": { 01:05:09.897 "state": "completed", 01:05:09.897 "digest": "sha384", 01:05:09.897 "dhgroup": "null" 01:05:09.897 } 01:05:09.897 } 01:05:09.897 ]' 01:05:09.897 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:09.897 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:05:09.897 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:10.156 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:05:10.156 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:10.156 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:10.156 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:10.156 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:10.415 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:05:10.415 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:05:10.981 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:10.981 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:10.981 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:10.981 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:10.981 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:10.981 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:10.981 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:10.981 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:05:10.981 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:05:10.981 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 01:05:10.981 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:10.981 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:05:10.981 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:05:10.981 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:05:10.981 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:10.981 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:10.981 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:10.981 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:10.981 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:10.981 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:10.981 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:10.981 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:11.239 01:05:11.239 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:11.239 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 01:05:11.240 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:11.499 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:11.499 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:11.499 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:11.499 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:11.499 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:11.499 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:11.499 { 01:05:11.499 "cntlid": 53, 01:05:11.499 "qid": 0, 01:05:11.499 "state": "enabled", 01:05:11.499 "thread": "nvmf_tgt_poll_group_000", 01:05:11.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:11.499 "listen_address": { 01:05:11.499 "trtype": "TCP", 01:05:11.499 "adrfam": "IPv4", 01:05:11.499 "traddr": "10.0.0.3", 01:05:11.499 "trsvcid": "4420" 01:05:11.499 }, 01:05:11.499 "peer_address": { 01:05:11.499 "trtype": "TCP", 01:05:11.499 "adrfam": "IPv4", 01:05:11.499 "traddr": "10.0.0.1", 01:05:11.499 "trsvcid": "47752" 01:05:11.499 }, 01:05:11.499 "auth": { 01:05:11.499 "state": "completed", 01:05:11.499 "digest": "sha384", 01:05:11.499 "dhgroup": "null" 01:05:11.499 } 01:05:11.499 } 01:05:11.499 ]' 01:05:11.499 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:11.499 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:05:11.499 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:11.758 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:05:11.758 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:11.758 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:11.758 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:11.758 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:12.016 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:05:12.016 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:05:12.583 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:12.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:12.583 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:12.583 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:12.583 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:12.583 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:12.583 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:12.583 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:05:12.583 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:05:12.583 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 01:05:12.583 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:12.583 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:05:12.583 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:05:12.583 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:05:12.583 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:12.583 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key3 01:05:12.583 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:12.583 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:12.583 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:12.583 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:05:12.583 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:05:12.583 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:05:12.841 01:05:12.841 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:12.841 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
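For orientation, the three RPC calls that set up every one of these passes can be read straight off the xtrace; the sketch below reconstructs them. setup_auth is a hypothetical wrapper name used only here, not a function from target/auth.sh, and the target-side calls are assumed to go to the nvmf application's default RPC socket.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8

  setup_auth() { # usage: setup_auth <digest> <dhgroup> <keyid>
          local digest=$1 dhgroup=$2 keyid=$3

          # restrict the host application to a single digest/dhgroup combination
          "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
                  --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

          # register the host on the target, bound to the DH-CHAP key under test
          # (plus --dhchap-ctrlr-key "ckey$keyid" whenever a controller key exists)
          "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
                  --dhchap-key "key$keyid"

          # attach a controller from the host application with the same key
          "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
                  -a 10.0.0.3 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
                  -b nvme0 --dhchap-key "key$keyid"
  }

  setup_auth sha384 null 3   # the combination exercised just above
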
01:05:12.841 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:13.100 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:13.100 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:13.100 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:13.100 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:13.100 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:13.100 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:13.100 { 01:05:13.100 "cntlid": 55, 01:05:13.100 "qid": 0, 01:05:13.100 "state": "enabled", 01:05:13.100 "thread": "nvmf_tgt_poll_group_000", 01:05:13.100 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:13.100 "listen_address": { 01:05:13.100 "trtype": "TCP", 01:05:13.100 "adrfam": "IPv4", 01:05:13.100 "traddr": "10.0.0.3", 01:05:13.100 "trsvcid": "4420" 01:05:13.100 }, 01:05:13.100 "peer_address": { 01:05:13.100 "trtype": "TCP", 01:05:13.100 "adrfam": "IPv4", 01:05:13.100 "traddr": "10.0.0.1", 01:05:13.100 "trsvcid": "34796" 01:05:13.100 }, 01:05:13.100 "auth": { 01:05:13.100 "state": "completed", 01:05:13.100 "digest": "sha384", 01:05:13.100 "dhgroup": "null" 01:05:13.100 } 01:05:13.100 } 01:05:13.100 ]' 01:05:13.100 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:13.100 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:05:13.100 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:13.358 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:05:13.358 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:13.358 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:13.358 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:13.358 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:13.616 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:05:13.616 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:05:14.183 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:14.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
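The target/auth.sh@118-@123 references in the trace give the shape of the driver loop: every digest is paired with every dhgroup and every key index. A schematic reconstruction, assuming only the array names the trace itself prints (digests, dhgroups, keys):

  for digest in "${digests[@]}"; do              # sha256 and sha384 appear in this excerpt
          for dhgroup in "${dhgroups[@]}"; do    # null, ffdhe2048, ffdhe3072, ... ffdhe8192
                  for keyid in "${!keys[@]}"; do # key indexes 0..3
                          hostrpc bdev_nvme_set_options \
                                  --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                          connect_authenticate "$digest" "$dhgroup" "$keyid"
                  done
          done
  done
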
01:05:14.183 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:14.184 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:14.184 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:14.184 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:14.184 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:05:14.184 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:14.184 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:05:14.184 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:05:14.184 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 01:05:14.184 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:14.184 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:05:14.184 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:05:14.184 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:05:14.184 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:14.184 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:14.184 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:14.184 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:14.184 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:14.184 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:14.184 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:14.184 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:14.442 01:05:14.442 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:14.442 06:04:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:14.442 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:14.715 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:14.715 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:14.715 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:14.715 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:14.715 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:14.715 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:14.715 { 01:05:14.715 "cntlid": 57, 01:05:14.715 "qid": 0, 01:05:14.715 "state": "enabled", 01:05:14.715 "thread": "nvmf_tgt_poll_group_000", 01:05:14.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:14.715 "listen_address": { 01:05:14.715 "trtype": "TCP", 01:05:14.715 "adrfam": "IPv4", 01:05:14.715 "traddr": "10.0.0.3", 01:05:14.715 "trsvcid": "4420" 01:05:14.715 }, 01:05:14.715 "peer_address": { 01:05:14.715 "trtype": "TCP", 01:05:14.715 "adrfam": "IPv4", 01:05:14.715 "traddr": "10.0.0.1", 01:05:14.715 "trsvcid": "34812" 01:05:14.715 }, 01:05:14.715 "auth": { 01:05:14.715 "state": "completed", 01:05:14.715 "digest": "sha384", 01:05:14.715 "dhgroup": "ffdhe2048" 01:05:14.715 } 01:05:14.715 } 01:05:14.715 ]' 01:05:14.715 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:14.715 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:05:14.715 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:14.715 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:05:14.715 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:14.974 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:14.974 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:14.974 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:14.974 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:05:14.974 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: 
--dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:05:15.539 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:15.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:15.539 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:15.539 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:15.539 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:15.798 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:15.798 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:15.798 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:05:15.798 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:05:15.798 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 01:05:15.798 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:15.798 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:05:15.798 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:05:15.798 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:05:15.798 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:15.798 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:15.798 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:15.798 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:15.798 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:15.798 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:15.798 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:15.798 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:16.058 01:05:16.058 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:16.058 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:16.058 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:16.317 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:16.317 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:16.317 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:16.317 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:16.317 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:16.317 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:16.317 { 01:05:16.317 "cntlid": 59, 01:05:16.317 "qid": 0, 01:05:16.317 "state": "enabled", 01:05:16.317 "thread": "nvmf_tgt_poll_group_000", 01:05:16.317 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:16.317 "listen_address": { 01:05:16.317 "trtype": "TCP", 01:05:16.317 "adrfam": "IPv4", 01:05:16.317 "traddr": "10.0.0.3", 01:05:16.317 "trsvcid": "4420" 01:05:16.317 }, 01:05:16.317 "peer_address": { 01:05:16.317 "trtype": "TCP", 01:05:16.317 "adrfam": "IPv4", 01:05:16.317 "traddr": "10.0.0.1", 01:05:16.317 "trsvcid": "34822" 01:05:16.317 }, 01:05:16.317 "auth": { 01:05:16.317 "state": "completed", 01:05:16.317 "digest": "sha384", 01:05:16.317 "dhgroup": "ffdhe2048" 01:05:16.317 } 01:05:16.317 } 01:05:16.317 ]' 01:05:16.317 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:16.577 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:05:16.577 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:16.577 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:05:16.577 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:16.577 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:16.577 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:16.577 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:16.835 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:05:16.835 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:05:17.402 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:17.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:17.402 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:17.402 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:17.402 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:17.402 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:17.402 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:17.402 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:05:17.402 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:05:17.402 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 01:05:17.402 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:17.402 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:05:17.402 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:05:17.402 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:05:17.402 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:17.402 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:17.402 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:17.402 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:17.402 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:17.402 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:17.402 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:17.661 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:17.920 01:05:17.920 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:17.920 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:17.920 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:17.920 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:17.920 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:17.920 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:17.920 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:17.920 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:17.920 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:17.920 { 01:05:17.920 "cntlid": 61, 01:05:17.920 "qid": 0, 01:05:17.920 "state": "enabled", 01:05:17.920 "thread": "nvmf_tgt_poll_group_000", 01:05:17.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:17.920 "listen_address": { 01:05:17.920 "trtype": "TCP", 01:05:17.920 "adrfam": "IPv4", 01:05:17.920 "traddr": "10.0.0.3", 01:05:17.920 "trsvcid": "4420" 01:05:17.920 }, 01:05:17.920 "peer_address": { 01:05:17.920 "trtype": "TCP", 01:05:17.920 "adrfam": "IPv4", 01:05:17.920 "traddr": "10.0.0.1", 01:05:17.920 "trsvcid": "34864" 01:05:17.920 }, 01:05:17.920 "auth": { 01:05:17.920 "state": "completed", 01:05:17.920 "digest": "sha384", 01:05:17.920 "dhgroup": "ffdhe2048" 01:05:17.920 } 01:05:17.920 } 01:05:17.920 ]' 01:05:17.920 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:18.180 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:05:18.180 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:18.180 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:05:18.180 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:18.180 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:18.180 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:18.180 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:18.439 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:05:18.439 06:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:05:19.007 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:19.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:19.007 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:19.007 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:19.007 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:19.007 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:19.007 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:19.007 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:05:19.007 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:05:19.267 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 01:05:19.267 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:19.267 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:05:19.267 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:05:19.267 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:05:19.267 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:19.267 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key3 01:05:19.267 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:19.267 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:19.267 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:19.267 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:05:19.267 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:05:19.267 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:05:19.526 01:05:19.526 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:19.526 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:19.526 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:19.786 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:19.786 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:19.786 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:19.786 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:19.786 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:19.786 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:19.786 { 01:05:19.786 "cntlid": 63, 01:05:19.786 "qid": 0, 01:05:19.786 "state": "enabled", 01:05:19.786 "thread": "nvmf_tgt_poll_group_000", 01:05:19.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:19.786 "listen_address": { 01:05:19.786 "trtype": "TCP", 01:05:19.786 "adrfam": "IPv4", 01:05:19.786 "traddr": "10.0.0.3", 01:05:19.786 "trsvcid": "4420" 01:05:19.786 }, 01:05:19.786 "peer_address": { 01:05:19.786 "trtype": "TCP", 01:05:19.786 "adrfam": "IPv4", 01:05:19.786 "traddr": "10.0.0.1", 01:05:19.786 "trsvcid": "34898" 01:05:19.786 }, 01:05:19.786 "auth": { 01:05:19.786 "state": "completed", 01:05:19.786 "digest": "sha384", 01:05:19.786 "dhgroup": "ffdhe2048" 01:05:19.786 } 01:05:19.786 } 01:05:19.786 ]' 01:05:19.786 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:19.786 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:05:19.786 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:19.786 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:05:19.786 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:19.786 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:19.786 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:19.786 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:20.045 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:05:20.045 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:05:20.613 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:20.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:20.613 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:20.613 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:20.613 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:20.613 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:20.613 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:05:20.613 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:20.613 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:05:20.613 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:05:20.887 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 01:05:20.887 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:20.887 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:05:20.887 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:05:20.887 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:05:20.887 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:20.887 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:20.887 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:20.887 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:20.887 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:20.887 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:20.888 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 01:05:20.888 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:21.146 01:05:21.146 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:21.146 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:21.146 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:21.405 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:21.405 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:21.405 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:21.405 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:21.405 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:21.405 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:21.405 { 01:05:21.405 "cntlid": 65, 01:05:21.405 "qid": 0, 01:05:21.405 "state": "enabled", 01:05:21.405 "thread": "nvmf_tgt_poll_group_000", 01:05:21.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:21.405 "listen_address": { 01:05:21.405 "trtype": "TCP", 01:05:21.405 "adrfam": "IPv4", 01:05:21.405 "traddr": "10.0.0.3", 01:05:21.405 "trsvcid": "4420" 01:05:21.405 }, 01:05:21.405 "peer_address": { 01:05:21.405 "trtype": "TCP", 01:05:21.405 "adrfam": "IPv4", 01:05:21.405 "traddr": "10.0.0.1", 01:05:21.405 "trsvcid": "34922" 01:05:21.405 }, 01:05:21.405 "auth": { 01:05:21.405 "state": "completed", 01:05:21.405 "digest": "sha384", 01:05:21.405 "dhgroup": "ffdhe3072" 01:05:21.405 } 01:05:21.405 } 01:05:21.405 ]' 01:05:21.405 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:21.405 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:05:21.405 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:21.405 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:05:21.405 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:21.405 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:21.405 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:21.405 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:21.664 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:05:21.664 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:05:22.231 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:22.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:22.231 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:22.231 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:22.231 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:22.231 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:22.231 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:22.231 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:05:22.231 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:05:22.490 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 01:05:22.490 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:22.490 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:05:22.490 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:05:22.490 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:05:22.490 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:22.490 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:22.490 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:22.490 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:22.490 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:22.490 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:22.491 06:04:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:22.491 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:22.750 01:05:22.750 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:22.750 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:22.750 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:23.009 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:23.009 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:23.009 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:23.009 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:23.009 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:23.009 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:23.009 { 01:05:23.009 "cntlid": 67, 01:05:23.009 "qid": 0, 01:05:23.009 "state": "enabled", 01:05:23.009 "thread": "nvmf_tgt_poll_group_000", 01:05:23.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:23.009 "listen_address": { 01:05:23.009 "trtype": "TCP", 01:05:23.009 "adrfam": "IPv4", 01:05:23.009 "traddr": "10.0.0.3", 01:05:23.009 "trsvcid": "4420" 01:05:23.009 }, 01:05:23.009 "peer_address": { 01:05:23.009 "trtype": "TCP", 01:05:23.009 "adrfam": "IPv4", 01:05:23.009 "traddr": "10.0.0.1", 01:05:23.009 "trsvcid": "46970" 01:05:23.009 }, 01:05:23.009 "auth": { 01:05:23.009 "state": "completed", 01:05:23.009 "digest": "sha384", 01:05:23.009 "dhgroup": "ffdhe3072" 01:05:23.009 } 01:05:23.009 } 01:05:23.009 ]' 01:05:23.009 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:23.009 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:05:23.009 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:23.009 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:05:23.009 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:23.009 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:23.009 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:23.009 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:23.268 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:05:23.269 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:05:23.837 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:23.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:23.837 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:23.837 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:23.837 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:23.837 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:23.837 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:23.837 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:05:23.837 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:05:24.096 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 01:05:24.096 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:24.096 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:05:24.096 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:05:24.096 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:05:24.096 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:24.096 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:24.096 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:24.096 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:24.096 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:24.096 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:24.096 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:24.096 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:24.355 01:05:24.355 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:24.355 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:24.355 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:24.614 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:24.614 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:24.614 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:24.614 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:24.614 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:24.614 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:24.614 { 01:05:24.614 "cntlid": 69, 01:05:24.614 "qid": 0, 01:05:24.614 "state": "enabled", 01:05:24.614 "thread": "nvmf_tgt_poll_group_000", 01:05:24.614 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:24.614 "listen_address": { 01:05:24.614 "trtype": "TCP", 01:05:24.614 "adrfam": "IPv4", 01:05:24.614 "traddr": "10.0.0.3", 01:05:24.614 "trsvcid": "4420" 01:05:24.614 }, 01:05:24.614 "peer_address": { 01:05:24.614 "trtype": "TCP", 01:05:24.614 "adrfam": "IPv4", 01:05:24.614 "traddr": "10.0.0.1", 01:05:24.614 "trsvcid": "46990" 01:05:24.614 }, 01:05:24.614 "auth": { 01:05:24.614 "state": "completed", 01:05:24.614 "digest": "sha384", 01:05:24.614 "dhgroup": "ffdhe3072" 01:05:24.614 } 01:05:24.614 } 01:05:24.614 ]' 01:05:24.614 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:24.614 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:05:24.614 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:24.614 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:05:24.614 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:24.614 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:24.614 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 01:05:24.614 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:24.873 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:05:24.873 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:05:25.441 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:25.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:25.441 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:25.441 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:25.441 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:25.441 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:25.441 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:25.441 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:05:25.441 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:05:25.699 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 01:05:25.699 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:25.699 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:05:25.699 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:05:25.699 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:05:25.699 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:25.699 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key3 01:05:25.699 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:25.699 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:25.699 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:25.699 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:05:25.699 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:05:25.699 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:05:25.958 01:05:25.958 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:25.958 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:25.958 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:26.216 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:26.216 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:26.216 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:26.216 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:26.216 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:26.216 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:26.216 { 01:05:26.216 "cntlid": 71, 01:05:26.217 "qid": 0, 01:05:26.217 "state": "enabled", 01:05:26.217 "thread": "nvmf_tgt_poll_group_000", 01:05:26.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:26.217 "listen_address": { 01:05:26.217 "trtype": "TCP", 01:05:26.217 "adrfam": "IPv4", 01:05:26.217 "traddr": "10.0.0.3", 01:05:26.217 "trsvcid": "4420" 01:05:26.217 }, 01:05:26.217 "peer_address": { 01:05:26.217 "trtype": "TCP", 01:05:26.217 "adrfam": "IPv4", 01:05:26.217 "traddr": "10.0.0.1", 01:05:26.217 "trsvcid": "47018" 01:05:26.217 }, 01:05:26.217 "auth": { 01:05:26.217 "state": "completed", 01:05:26.217 "digest": "sha384", 01:05:26.217 "dhgroup": "ffdhe3072" 01:05:26.217 } 01:05:26.217 } 01:05:26.217 ]' 01:05:26.217 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:26.217 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:05:26.217 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:26.217 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:05:26.217 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:26.475 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:26.475 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:26.475 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:26.475 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:05:26.475 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:05:27.041 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:27.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:27.041 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:27.041 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:27.041 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:27.041 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:27.041 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:05:27.041 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:27.041 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:05:27.041 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:05:27.301 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 01:05:27.301 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:27.301 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:05:27.301 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:05:27.301 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:05:27.301 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:27.301 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:27.301 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:27.301 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:27.301 06:04:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:27.301 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:27.301 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:27.301 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:27.560 01:05:27.560 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:27.560 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:27.560 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:27.820 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:27.820 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:27.820 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:27.820 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:27.820 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:27.820 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:27.820 { 01:05:27.820 "cntlid": 73, 01:05:27.820 "qid": 0, 01:05:27.820 "state": "enabled", 01:05:27.820 "thread": "nvmf_tgt_poll_group_000", 01:05:27.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:27.820 "listen_address": { 01:05:27.820 "trtype": "TCP", 01:05:27.820 "adrfam": "IPv4", 01:05:27.820 "traddr": "10.0.0.3", 01:05:27.820 "trsvcid": "4420" 01:05:27.820 }, 01:05:27.820 "peer_address": { 01:05:27.820 "trtype": "TCP", 01:05:27.820 "adrfam": "IPv4", 01:05:27.820 "traddr": "10.0.0.1", 01:05:27.820 "trsvcid": "47046" 01:05:27.820 }, 01:05:27.820 "auth": { 01:05:27.820 "state": "completed", 01:05:27.820 "digest": "sha384", 01:05:27.820 "dhgroup": "ffdhe4096" 01:05:27.820 } 01:05:27.820 } 01:05:27.820 ]' 01:05:27.820 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:27.820 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:05:27.820 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:28.080 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:05:28.080 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:28.080 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:28.080 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:28.080 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:28.339 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:05:28.339 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:05:28.909 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:28.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:28.909 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:28.909 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:28.909 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:28.909 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:28.909 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:28.909 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:05:28.909 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:05:28.909 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 01:05:28.909 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:28.909 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:05:28.909 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:05:28.909 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:05:28.909 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:28.909 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:28.909 06:04:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:28.909 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:28.909 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:28.909 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:28.909 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:28.910 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:29.478 01:05:29.478 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:29.478 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:29.478 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:29.478 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:29.478 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:29.478 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:29.478 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:29.478 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:29.478 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:29.478 { 01:05:29.478 "cntlid": 75, 01:05:29.478 "qid": 0, 01:05:29.478 "state": "enabled", 01:05:29.478 "thread": "nvmf_tgt_poll_group_000", 01:05:29.478 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:29.478 "listen_address": { 01:05:29.478 "trtype": "TCP", 01:05:29.478 "adrfam": "IPv4", 01:05:29.478 "traddr": "10.0.0.3", 01:05:29.478 "trsvcid": "4420" 01:05:29.478 }, 01:05:29.478 "peer_address": { 01:05:29.478 "trtype": "TCP", 01:05:29.478 "adrfam": "IPv4", 01:05:29.478 "traddr": "10.0.0.1", 01:05:29.478 "trsvcid": "47082" 01:05:29.478 }, 01:05:29.478 "auth": { 01:05:29.478 "state": "completed", 01:05:29.478 "digest": "sha384", 01:05:29.478 "dhgroup": "ffdhe4096" 01:05:29.478 } 01:05:29.478 } 01:05:29.478 ]' 01:05:29.478 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:29.737 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:05:29.737 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:29.737 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 01:05:29.737 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:29.737 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:29.737 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:29.737 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:30.017 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:05:30.017 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:05:30.586 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:30.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:30.586 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:30.586 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:30.586 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:30.586 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:30.586 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:30.586 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:05:30.586 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:05:30.586 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 01:05:30.586 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:30.586 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:05:30.586 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:05:30.586 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:05:30.586 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:30.586 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:30.586 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:30.586 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:30.586 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:30.586 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:30.586 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:30.586 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:31.152 01:05:31.152 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:31.152 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:31.152 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:31.152 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:31.152 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:31.152 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:31.152 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:31.153 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:31.153 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:31.153 { 01:05:31.153 "cntlid": 77, 01:05:31.153 "qid": 0, 01:05:31.153 "state": "enabled", 01:05:31.153 "thread": "nvmf_tgt_poll_group_000", 01:05:31.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:31.153 "listen_address": { 01:05:31.153 "trtype": "TCP", 01:05:31.153 "adrfam": "IPv4", 01:05:31.153 "traddr": "10.0.0.3", 01:05:31.153 "trsvcid": "4420" 01:05:31.153 }, 01:05:31.153 "peer_address": { 01:05:31.153 "trtype": "TCP", 01:05:31.153 "adrfam": "IPv4", 01:05:31.153 "traddr": "10.0.0.1", 01:05:31.153 "trsvcid": "47126" 01:05:31.153 }, 01:05:31.153 "auth": { 01:05:31.153 "state": "completed", 01:05:31.153 "digest": "sha384", 01:05:31.153 "dhgroup": "ffdhe4096" 01:05:31.153 } 01:05:31.153 } 01:05:31.153 ]' 01:05:31.153 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:31.411 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:05:31.411 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 01:05:31.411 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:05:31.411 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:31.411 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:31.411 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:31.411 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:31.670 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:05:31.670 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:05:32.237 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:32.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:32.237 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:32.237 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:32.237 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:32.237 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:32.237 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:32.237 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:05:32.237 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:05:32.497 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 01:05:32.497 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:32.497 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:05:32.497 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:05:32.497 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:05:32.497 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:32.497 06:04:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key3 01:05:32.497 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:32.497 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:32.497 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:32.497 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:05:32.497 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:05:32.497 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:05:32.756 01:05:32.756 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:32.756 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:32.756 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:33.015 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:33.015 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:33.015 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:33.015 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:33.015 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:33.015 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:33.015 { 01:05:33.015 "cntlid": 79, 01:05:33.015 "qid": 0, 01:05:33.015 "state": "enabled", 01:05:33.015 "thread": "nvmf_tgt_poll_group_000", 01:05:33.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:33.015 "listen_address": { 01:05:33.015 "trtype": "TCP", 01:05:33.015 "adrfam": "IPv4", 01:05:33.015 "traddr": "10.0.0.3", 01:05:33.015 "trsvcid": "4420" 01:05:33.015 }, 01:05:33.015 "peer_address": { 01:05:33.015 "trtype": "TCP", 01:05:33.015 "adrfam": "IPv4", 01:05:33.015 "traddr": "10.0.0.1", 01:05:33.015 "trsvcid": "35382" 01:05:33.015 }, 01:05:33.015 "auth": { 01:05:33.015 "state": "completed", 01:05:33.015 "digest": "sha384", 01:05:33.015 "dhgroup": "ffdhe4096" 01:05:33.015 } 01:05:33.015 } 01:05:33.015 ]' 01:05:33.015 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:33.015 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:05:33.015 06:04:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:33.015 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:05:33.015 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:33.015 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:33.015 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:33.015 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:33.274 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:05:33.274 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:05:33.842 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:33.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:33.842 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:33.842 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:33.842 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:33.842 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:33.842 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:05:33.842 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:33.842 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:05:33.842 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:05:34.101 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 01:05:34.101 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:34.101 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:05:34.101 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:05:34.101 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:05:34.101 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:34.101 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:34.101 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:34.101 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:34.101 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:34.101 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:34.101 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:34.101 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:34.360 01:05:34.360 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:34.360 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:34.360 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:34.620 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:34.620 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:34.620 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:34.620 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:34.620 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:34.620 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:34.620 { 01:05:34.620 "cntlid": 81, 01:05:34.620 "qid": 0, 01:05:34.620 "state": "enabled", 01:05:34.620 "thread": "nvmf_tgt_poll_group_000", 01:05:34.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:34.620 "listen_address": { 01:05:34.620 "trtype": "TCP", 01:05:34.620 "adrfam": "IPv4", 01:05:34.620 "traddr": "10.0.0.3", 01:05:34.620 "trsvcid": "4420" 01:05:34.620 }, 01:05:34.620 "peer_address": { 01:05:34.620 "trtype": "TCP", 01:05:34.620 "adrfam": "IPv4", 01:05:34.620 "traddr": "10.0.0.1", 01:05:34.620 "trsvcid": "35416" 01:05:34.620 }, 01:05:34.620 "auth": { 01:05:34.620 "state": "completed", 01:05:34.620 "digest": "sha384", 01:05:34.620 "dhgroup": "ffdhe6144" 01:05:34.620 } 01:05:34.620 } 01:05:34.620 ]' 01:05:34.620 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
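The trace in this section cycles through one pattern per digest/dhgroup/key combination: configure the host-side DH-HMAC-CHAP options, allow the host on the subsystem with a key, attach a controller (which is where authentication actually runs), check the negotiated auth parameters on the target's queue pairs, then tear everything down. The following is only a minimal sketch of one such iteration, not the test script itself; it assumes the target RPC server is listening on its default socket and the host-side server on /var/tmp/host.sock, that keys named key0/ckey0 were registered earlier in the run, and it reuses the addresses and NQNs shown in the log above.

  # Names taken from the log; adjust to the environment under test.
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Restrict the host-side bdev_nvme module to one digest/dhgroup combination.
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

  # Allow the host on the subsystem, binding it to key0 (and ckey0 for bidirectional auth).
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Attach a controller from the host side; DH-HMAC-CHAP runs during this connect.
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Verify on the target that the queue pair completed authentication with the expected parameters.
  $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'

  # Tear down: detach the controller and drop the host entry again.
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The log additionally exercises the kernel initiator path for each key, using nvme connect with --dhchap-secret (and --dhchap-ctrl-secret where a controller key exists) followed by nvme disconnect -n "$SUBNQN" before the host entry is removed.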
01:05:34.620 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:05:34.620 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:34.620 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:05:34.620 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:34.879 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:34.879 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:34.879 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:34.879 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:05:34.879 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:05:35.447 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:35.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:35.447 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:35.447 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:35.447 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:35.447 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:35.447 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:35.447 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:05:35.447 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:05:35.706 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 01:05:35.706 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:35.706 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:05:35.706 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 01:05:35.706 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:05:35.706 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:35.706 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:35.706 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:35.706 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:35.706 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:35.706 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:35.706 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:35.706 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:36.276 01:05:36.276 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:36.276 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:36.276 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:36.277 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:36.277 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:36.277 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:36.277 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:36.277 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:36.277 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:36.277 { 01:05:36.277 "cntlid": 83, 01:05:36.277 "qid": 0, 01:05:36.277 "state": "enabled", 01:05:36.277 "thread": "nvmf_tgt_poll_group_000", 01:05:36.277 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:36.277 "listen_address": { 01:05:36.277 "trtype": "TCP", 01:05:36.277 "adrfam": "IPv4", 01:05:36.277 "traddr": "10.0.0.3", 01:05:36.277 "trsvcid": "4420" 01:05:36.277 }, 01:05:36.277 "peer_address": { 01:05:36.277 "trtype": "TCP", 01:05:36.277 "adrfam": "IPv4", 01:05:36.277 "traddr": "10.0.0.1", 01:05:36.277 "trsvcid": "35454" 01:05:36.277 }, 01:05:36.277 "auth": { 01:05:36.277 "state": "completed", 01:05:36.277 "digest": "sha384", 
01:05:36.277 "dhgroup": "ffdhe6144" 01:05:36.277 } 01:05:36.277 } 01:05:36.277 ]' 01:05:36.277 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:36.536 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:05:36.536 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:36.536 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:05:36.536 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:36.536 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:36.536 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:36.536 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:36.795 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:05:36.795 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:05:37.362 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:37.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:37.363 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:37.363 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:37.363 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:37.363 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:37.363 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:37.363 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:05:37.363 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:05:37.363 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 01:05:37.363 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:37.363 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 01:05:37.363 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:05:37.363 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:05:37.363 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:37.363 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:37.363 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:37.363 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:37.363 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:37.363 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:37.363 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:37.363 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:37.931 01:05:37.931 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:37.931 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:37.931 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:37.931 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:37.931 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:37.931 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:37.931 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:37.931 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:37.931 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:37.931 { 01:05:37.931 "cntlid": 85, 01:05:37.931 "qid": 0, 01:05:37.931 "state": "enabled", 01:05:37.931 "thread": "nvmf_tgt_poll_group_000", 01:05:37.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:37.931 "listen_address": { 01:05:37.931 "trtype": "TCP", 01:05:37.931 "adrfam": "IPv4", 01:05:37.931 "traddr": "10.0.0.3", 01:05:37.931 "trsvcid": "4420" 01:05:37.931 }, 01:05:37.931 "peer_address": { 01:05:37.931 "trtype": "TCP", 01:05:37.931 "adrfam": "IPv4", 01:05:37.931 "traddr": "10.0.0.1", 01:05:37.931 "trsvcid": "35470" 
01:05:37.931 }, 01:05:37.931 "auth": { 01:05:37.931 "state": "completed", 01:05:37.931 "digest": "sha384", 01:05:37.931 "dhgroup": "ffdhe6144" 01:05:37.931 } 01:05:37.931 } 01:05:37.931 ]' 01:05:37.931 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:37.931 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:05:37.931 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:38.191 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:05:38.191 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:38.191 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:38.191 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:38.191 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:38.450 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:05:38.450 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:05:39.019 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:39.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:39.019 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:39.019 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:39.019 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:39.019 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:39.019 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:39.019 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:05:39.019 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:05:39.019 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 01:05:39.019 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 01:05:39.019 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:05:39.020 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:05:39.020 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:05:39.020 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:39.020 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key3 01:05:39.020 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:39.020 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:39.020 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:39.020 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:05:39.020 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:05:39.020 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:05:39.590 01:05:39.590 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:39.590 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:39.590 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:39.590 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:39.590 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:39.590 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:39.590 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:39.590 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:39.590 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:39.590 { 01:05:39.590 "cntlid": 87, 01:05:39.590 "qid": 0, 01:05:39.590 "state": "enabled", 01:05:39.590 "thread": "nvmf_tgt_poll_group_000", 01:05:39.590 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:39.590 "listen_address": { 01:05:39.590 "trtype": "TCP", 01:05:39.590 "adrfam": "IPv4", 01:05:39.590 "traddr": "10.0.0.3", 01:05:39.590 "trsvcid": "4420" 01:05:39.590 }, 01:05:39.590 "peer_address": { 01:05:39.590 "trtype": "TCP", 01:05:39.590 "adrfam": "IPv4", 01:05:39.590 "traddr": "10.0.0.1", 01:05:39.590 "trsvcid": 
"35506" 01:05:39.590 }, 01:05:39.590 "auth": { 01:05:39.590 "state": "completed", 01:05:39.590 "digest": "sha384", 01:05:39.590 "dhgroup": "ffdhe6144" 01:05:39.590 } 01:05:39.590 } 01:05:39.590 ]' 01:05:39.590 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:39.850 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:05:39.850 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:39.850 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:05:39.850 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:39.850 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:39.850 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:39.850 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:40.110 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:05:40.110 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:05:40.684 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:40.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:40.684 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:40.684 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:40.684 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:40.684 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:40.685 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:05:40.685 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:40.685 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:05:40.685 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:05:40.685 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 01:05:40.685 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 01:05:40.685 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:05:40.685 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:05:40.685 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:05:40.685 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:40.685 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:40.685 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:40.685 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:40.685 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:40.685 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:40.685 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:40.685 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:41.292 01:05:41.292 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:41.292 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:41.292 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:41.609 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:41.609 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:41.609 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:41.609 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:41.609 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:41.609 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:41.609 { 01:05:41.609 "cntlid": 89, 01:05:41.609 "qid": 0, 01:05:41.609 "state": "enabled", 01:05:41.609 "thread": "nvmf_tgt_poll_group_000", 01:05:41.609 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:41.609 "listen_address": { 01:05:41.609 "trtype": "TCP", 01:05:41.609 "adrfam": "IPv4", 01:05:41.609 "traddr": "10.0.0.3", 01:05:41.609 "trsvcid": "4420" 01:05:41.609 }, 01:05:41.609 "peer_address": { 01:05:41.609 
"trtype": "TCP", 01:05:41.609 "adrfam": "IPv4", 01:05:41.609 "traddr": "10.0.0.1", 01:05:41.609 "trsvcid": "35538" 01:05:41.609 }, 01:05:41.609 "auth": { 01:05:41.609 "state": "completed", 01:05:41.609 "digest": "sha384", 01:05:41.609 "dhgroup": "ffdhe8192" 01:05:41.609 } 01:05:41.609 } 01:05:41.609 ]' 01:05:41.609 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:41.609 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:05:41.609 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:41.609 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:05:41.609 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:41.609 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:41.609 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:41.609 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:41.931 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:05:41.931 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:05:42.539 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:42.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:42.539 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:42.539 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:42.539 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:42.539 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:42.539 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:42.539 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:05:42.539 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:05:42.539 06:04:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 01:05:42.539 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:42.539 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:05:42.539 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:05:42.539 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:05:42.539 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:42.539 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:42.539 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:42.539 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:42.539 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:42.539 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:42.539 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:42.539 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:43.109 01:05:43.109 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:43.109 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:43.109 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:43.368 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:43.368 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:43.368 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:43.368 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:43.368 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:43.368 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:43.368 { 01:05:43.368 "cntlid": 91, 01:05:43.368 "qid": 0, 01:05:43.368 "state": "enabled", 01:05:43.368 "thread": "nvmf_tgt_poll_group_000", 01:05:43.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 
01:05:43.368 "listen_address": { 01:05:43.368 "trtype": "TCP", 01:05:43.368 "adrfam": "IPv4", 01:05:43.368 "traddr": "10.0.0.3", 01:05:43.368 "trsvcid": "4420" 01:05:43.368 }, 01:05:43.368 "peer_address": { 01:05:43.368 "trtype": "TCP", 01:05:43.368 "adrfam": "IPv4", 01:05:43.368 "traddr": "10.0.0.1", 01:05:43.368 "trsvcid": "55676" 01:05:43.368 }, 01:05:43.368 "auth": { 01:05:43.368 "state": "completed", 01:05:43.368 "digest": "sha384", 01:05:43.368 "dhgroup": "ffdhe8192" 01:05:43.368 } 01:05:43.368 } 01:05:43.368 ]' 01:05:43.368 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:43.369 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:05:43.369 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:43.369 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:05:43.369 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:43.626 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:43.626 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:43.626 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:43.626 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:05:43.626 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:05:44.194 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:44.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:44.194 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:44.194 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:44.194 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:44.194 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:44.194 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:44.194 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:05:44.194 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:05:44.452 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 01:05:44.452 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:44.452 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:05:44.452 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:05:44.452 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:05:44.452 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:44.452 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:44.452 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:44.452 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:44.452 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:44.452 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:44.452 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:44.452 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:45.018 01:05:45.018 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:45.018 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:45.018 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:45.276 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:45.276 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:45.276 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:45.276 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:45.276 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:45.276 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:45.276 { 01:05:45.276 "cntlid": 93, 01:05:45.276 "qid": 0, 01:05:45.276 "state": "enabled", 01:05:45.276 "thread": 
"nvmf_tgt_poll_group_000", 01:05:45.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:45.276 "listen_address": { 01:05:45.276 "trtype": "TCP", 01:05:45.276 "adrfam": "IPv4", 01:05:45.276 "traddr": "10.0.0.3", 01:05:45.276 "trsvcid": "4420" 01:05:45.276 }, 01:05:45.276 "peer_address": { 01:05:45.276 "trtype": "TCP", 01:05:45.276 "adrfam": "IPv4", 01:05:45.276 "traddr": "10.0.0.1", 01:05:45.276 "trsvcid": "55708" 01:05:45.276 }, 01:05:45.276 "auth": { 01:05:45.276 "state": "completed", 01:05:45.276 "digest": "sha384", 01:05:45.276 "dhgroup": "ffdhe8192" 01:05:45.276 } 01:05:45.276 } 01:05:45.276 ]' 01:05:45.276 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:45.276 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:05:45.276 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:45.276 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:05:45.276 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:45.276 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:45.276 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:45.276 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:45.533 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:05:45.533 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:05:46.099 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:46.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:46.099 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:46.099 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:46.099 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:46.099 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:46.099 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:46.099 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:05:46.099 06:04:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:05:46.358 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 01:05:46.358 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:46.358 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:05:46.358 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:05:46.358 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:05:46.358 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:46.358 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key3 01:05:46.358 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:46.358 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:46.358 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:46.358 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:05:46.358 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:05:46.358 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:05:46.927 01:05:46.927 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:46.927 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:46.927 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:46.927 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:46.927 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:46.927 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:46.927 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:46.927 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:46.927 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:46.927 { 01:05:46.927 "cntlid": 95, 01:05:46.927 "qid": 0, 01:05:46.927 "state": "enabled", 01:05:46.927 
"thread": "nvmf_tgt_poll_group_000", 01:05:46.927 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:46.927 "listen_address": { 01:05:46.927 "trtype": "TCP", 01:05:46.927 "adrfam": "IPv4", 01:05:46.927 "traddr": "10.0.0.3", 01:05:46.927 "trsvcid": "4420" 01:05:46.927 }, 01:05:46.927 "peer_address": { 01:05:46.927 "trtype": "TCP", 01:05:46.927 "adrfam": "IPv4", 01:05:46.927 "traddr": "10.0.0.1", 01:05:46.927 "trsvcid": "55748" 01:05:46.927 }, 01:05:46.927 "auth": { 01:05:46.927 "state": "completed", 01:05:46.927 "digest": "sha384", 01:05:46.927 "dhgroup": "ffdhe8192" 01:05:46.927 } 01:05:46.927 } 01:05:46.927 ]' 01:05:46.927 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:46.927 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:05:46.927 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:47.186 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:05:47.186 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:47.186 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:47.186 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:47.186 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:47.445 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:05:47.445 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:05:48.014 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:48.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:48.014 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:48.014 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:48.014 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:48.014 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:48.014 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 01:05:48.014 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:05:48.014 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:48.014 06:04:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:05:48.014 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:05:48.014 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 01:05:48.014 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:48.014 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:05:48.014 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:05:48.014 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:05:48.014 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:48.014 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:48.014 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:48.014 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:48.014 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:48.014 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:48.014 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:48.014 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:48.273 01:05:48.273 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:48.273 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:48.273 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:48.533 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:48.533 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:48.533 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:48.533 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:48.533 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:48.534 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:48.534 { 01:05:48.534 "cntlid": 97, 01:05:48.534 "qid": 0, 01:05:48.534 "state": "enabled", 01:05:48.534 "thread": "nvmf_tgt_poll_group_000", 01:05:48.534 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:48.534 "listen_address": { 01:05:48.534 "trtype": "TCP", 01:05:48.534 "adrfam": "IPv4", 01:05:48.534 "traddr": "10.0.0.3", 01:05:48.534 "trsvcid": "4420" 01:05:48.534 }, 01:05:48.534 "peer_address": { 01:05:48.534 "trtype": "TCP", 01:05:48.534 "adrfam": "IPv4", 01:05:48.534 "traddr": "10.0.0.1", 01:05:48.534 "trsvcid": "55794" 01:05:48.534 }, 01:05:48.534 "auth": { 01:05:48.534 "state": "completed", 01:05:48.534 "digest": "sha512", 01:05:48.534 "dhgroup": "null" 01:05:48.534 } 01:05:48.534 } 01:05:48.534 ]' 01:05:48.534 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:48.534 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:05:48.534 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:48.793 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:05:48.793 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:48.793 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:48.793 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:48.793 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:49.052 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:05:49.052 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:05:49.621 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:49.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:49.621 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:49.621 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:49.621 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:49.621 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 01:05:49.621 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:49.621 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:05:49.621 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:05:49.621 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 01:05:49.621 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:49.621 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:05:49.621 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:05:49.621 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:05:49.621 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:49.621 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:49.621 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:49.621 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:49.621 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:49.621 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:49.621 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:49.621 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:49.880 01:05:49.880 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:49.880 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:49.880 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:50.138 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:50.138 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:50.138 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:50.138 06:04:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:50.138 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:50.138 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:50.138 { 01:05:50.138 "cntlid": 99, 01:05:50.138 "qid": 0, 01:05:50.138 "state": "enabled", 01:05:50.138 "thread": "nvmf_tgt_poll_group_000", 01:05:50.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:50.138 "listen_address": { 01:05:50.138 "trtype": "TCP", 01:05:50.138 "adrfam": "IPv4", 01:05:50.138 "traddr": "10.0.0.3", 01:05:50.138 "trsvcid": "4420" 01:05:50.138 }, 01:05:50.138 "peer_address": { 01:05:50.138 "trtype": "TCP", 01:05:50.138 "adrfam": "IPv4", 01:05:50.138 "traddr": "10.0.0.1", 01:05:50.138 "trsvcid": "55826" 01:05:50.138 }, 01:05:50.138 "auth": { 01:05:50.138 "state": "completed", 01:05:50.138 "digest": "sha512", 01:05:50.138 "dhgroup": "null" 01:05:50.138 } 01:05:50.138 } 01:05:50.138 ]' 01:05:50.138 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:50.138 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:05:50.138 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:50.396 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:05:50.396 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:50.396 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:50.396 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:50.396 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:50.654 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:05:50.654 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:05:51.220 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:51.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:51.220 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:51.220 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:51.220 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:51.220 06:04:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:51.220 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:51.220 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:05:51.220 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:05:51.220 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 01:05:51.220 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:51.220 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:05:51.220 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:05:51.220 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:05:51.220 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:51.220 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:51.220 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:51.220 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:51.220 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:51.220 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:51.220 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:51.220 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:51.479 01:05:51.479 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:51.479 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:51.479 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:51.737 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:51.737 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:51.737 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:05:51.737 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:51.737 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:51.737 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:51.737 { 01:05:51.737 "cntlid": 101, 01:05:51.737 "qid": 0, 01:05:51.737 "state": "enabled", 01:05:51.737 "thread": "nvmf_tgt_poll_group_000", 01:05:51.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:51.737 "listen_address": { 01:05:51.737 "trtype": "TCP", 01:05:51.737 "adrfam": "IPv4", 01:05:51.737 "traddr": "10.0.0.3", 01:05:51.737 "trsvcid": "4420" 01:05:51.737 }, 01:05:51.737 "peer_address": { 01:05:51.737 "trtype": "TCP", 01:05:51.737 "adrfam": "IPv4", 01:05:51.737 "traddr": "10.0.0.1", 01:05:51.737 "trsvcid": "55848" 01:05:51.737 }, 01:05:51.737 "auth": { 01:05:51.737 "state": "completed", 01:05:51.737 "digest": "sha512", 01:05:51.737 "dhgroup": "null" 01:05:51.737 } 01:05:51.737 } 01:05:51.737 ]' 01:05:51.737 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:51.737 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:05:51.737 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:51.995 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:05:51.995 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:51.995 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:51.995 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:51.995 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:52.253 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:05:52.253 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:05:52.820 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:52.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:52.820 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:52.820 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:52.820 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 01:05:52.820 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:52.820 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:52.820 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:05:52.820 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:05:52.820 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 01:05:52.820 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:52.820 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:05:52.820 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:05:52.820 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:05:52.820 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:52.820 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key3 01:05:52.820 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:52.820 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:52.820 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:52.820 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:05:52.820 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:05:52.820 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:05:53.078 01:05:53.078 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:53.078 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:53.078 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:53.338 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:53.338 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:53.338 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:05:53.338 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:53.338 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:53.338 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:53.338 { 01:05:53.338 "cntlid": 103, 01:05:53.338 "qid": 0, 01:05:53.338 "state": "enabled", 01:05:53.338 "thread": "nvmf_tgt_poll_group_000", 01:05:53.338 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:53.338 "listen_address": { 01:05:53.338 "trtype": "TCP", 01:05:53.338 "adrfam": "IPv4", 01:05:53.338 "traddr": "10.0.0.3", 01:05:53.338 "trsvcid": "4420" 01:05:53.338 }, 01:05:53.338 "peer_address": { 01:05:53.338 "trtype": "TCP", 01:05:53.338 "adrfam": "IPv4", 01:05:53.338 "traddr": "10.0.0.1", 01:05:53.338 "trsvcid": "55022" 01:05:53.338 }, 01:05:53.338 "auth": { 01:05:53.338 "state": "completed", 01:05:53.338 "digest": "sha512", 01:05:53.338 "dhgroup": "null" 01:05:53.338 } 01:05:53.338 } 01:05:53.338 ]' 01:05:53.338 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:53.338 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:05:53.338 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:53.598 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:05:53.598 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:53.598 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:53.598 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:53.598 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:53.856 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:05:53.856 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:05:54.422 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:54.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:54.422 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:54.422 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:54.422 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:54.422 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 01:05:54.422 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:05:54.422 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:54.422 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:05:54.422 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:05:54.422 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 01:05:54.422 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:54.422 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:05:54.422 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:05:54.422 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:05:54.422 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:54.422 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:54.422 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:54.422 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:54.422 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:54.422 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:54.422 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:54.422 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:54.990 01:05:54.990 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:54.990 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:54.990 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:54.990 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:54.990 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:54.990 
06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:54.990 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:54.990 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:54.990 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:54.990 { 01:05:54.990 "cntlid": 105, 01:05:54.990 "qid": 0, 01:05:54.990 "state": "enabled", 01:05:54.990 "thread": "nvmf_tgt_poll_group_000", 01:05:54.990 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:54.990 "listen_address": { 01:05:54.990 "trtype": "TCP", 01:05:54.990 "adrfam": "IPv4", 01:05:54.990 "traddr": "10.0.0.3", 01:05:54.990 "trsvcid": "4420" 01:05:54.990 }, 01:05:54.990 "peer_address": { 01:05:54.990 "trtype": "TCP", 01:05:54.990 "adrfam": "IPv4", 01:05:54.990 "traddr": "10.0.0.1", 01:05:54.990 "trsvcid": "55040" 01:05:54.990 }, 01:05:54.990 "auth": { 01:05:54.990 "state": "completed", 01:05:54.990 "digest": "sha512", 01:05:54.990 "dhgroup": "ffdhe2048" 01:05:54.990 } 01:05:54.990 } 01:05:54.990 ]' 01:05:54.990 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:54.990 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:05:54.990 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:55.268 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:05:55.268 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:55.268 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:55.268 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:55.268 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:55.268 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:05:55.268 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:05:55.835 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:55.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:55.835 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:55.835 06:04:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:55.835 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:56.093 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:56.093 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:56.093 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:05:56.094 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:05:56.094 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 01:05:56.094 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:56.094 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:05:56.094 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:05:56.094 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:05:56.094 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:56.094 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:56.094 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:56.094 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:56.094 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:56.094 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:56.094 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:56.094 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:56.353 01:05:56.353 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:56.353 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:56.353 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:56.612 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 01:05:56.612 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:56.612 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:56.612 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:56.612 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:56.612 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:56.612 { 01:05:56.612 "cntlid": 107, 01:05:56.612 "qid": 0, 01:05:56.612 "state": "enabled", 01:05:56.612 "thread": "nvmf_tgt_poll_group_000", 01:05:56.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:56.612 "listen_address": { 01:05:56.612 "trtype": "TCP", 01:05:56.612 "adrfam": "IPv4", 01:05:56.612 "traddr": "10.0.0.3", 01:05:56.612 "trsvcid": "4420" 01:05:56.612 }, 01:05:56.612 "peer_address": { 01:05:56.612 "trtype": "TCP", 01:05:56.612 "adrfam": "IPv4", 01:05:56.612 "traddr": "10.0.0.1", 01:05:56.612 "trsvcid": "55068" 01:05:56.612 }, 01:05:56.612 "auth": { 01:05:56.612 "state": "completed", 01:05:56.612 "digest": "sha512", 01:05:56.612 "dhgroup": "ffdhe2048" 01:05:56.612 } 01:05:56.612 } 01:05:56.612 ]' 01:05:56.612 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:56.612 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:05:56.612 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:56.871 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:05:56.871 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:56.871 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:56.871 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:56.871 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:57.130 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:05:57.130 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:05:57.696 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:57.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:57.696 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:57.696 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:57.696 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:57.696 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:57.696 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:57.696 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:05:57.696 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:05:57.696 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 01:05:57.696 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:57.696 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:05:57.696 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:05:57.696 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:05:57.696 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:57.696 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:57.696 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:57.696 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:57.696 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:57.696 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:57.696 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:57.696 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:57.954 01:05:57.954 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:57.954 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:05:57.954 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 01:05:58.212 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:58.212 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:58.212 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:58.212 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:58.212 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:58.212 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:58.212 { 01:05:58.212 "cntlid": 109, 01:05:58.212 "qid": 0, 01:05:58.212 "state": "enabled", 01:05:58.212 "thread": "nvmf_tgt_poll_group_000", 01:05:58.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:58.212 "listen_address": { 01:05:58.212 "trtype": "TCP", 01:05:58.212 "adrfam": "IPv4", 01:05:58.212 "traddr": "10.0.0.3", 01:05:58.212 "trsvcid": "4420" 01:05:58.212 }, 01:05:58.212 "peer_address": { 01:05:58.212 "trtype": "TCP", 01:05:58.212 "adrfam": "IPv4", 01:05:58.212 "traddr": "10.0.0.1", 01:05:58.212 "trsvcid": "55100" 01:05:58.212 }, 01:05:58.212 "auth": { 01:05:58.212 "state": "completed", 01:05:58.212 "digest": "sha512", 01:05:58.212 "dhgroup": "ffdhe2048" 01:05:58.212 } 01:05:58.212 } 01:05:58.212 ]' 01:05:58.212 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:05:58.212 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:05:58.470 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:05:58.470 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:05:58.470 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:05:58.470 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:05:58.470 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:05:58.470 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:05:58.729 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:05:58.729 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:05:59.296 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:05:59.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:05:59.296 06:04:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:05:59.296 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:59.296 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:59.296 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:59.296 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:05:59.296 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:05:59.296 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:05:59.296 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 01:05:59.296 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:05:59.296 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:05:59.296 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:05:59.296 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:05:59.296 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:05:59.296 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key3 01:05:59.296 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:59.296 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:59.296 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:59.296 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:05:59.296 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:05:59.297 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:05:59.555 01:05:59.815 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:05:59.815 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:05:59.815 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 01:05:59.815 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:59.815 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:05:59.815 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:59.815 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:05:59.815 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:59.815 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:05:59.815 { 01:05:59.815 "cntlid": 111, 01:05:59.815 "qid": 0, 01:05:59.815 "state": "enabled", 01:05:59.815 "thread": "nvmf_tgt_poll_group_000", 01:05:59.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:05:59.815 "listen_address": { 01:05:59.815 "trtype": "TCP", 01:05:59.815 "adrfam": "IPv4", 01:05:59.815 "traddr": "10.0.0.3", 01:05:59.815 "trsvcid": "4420" 01:05:59.815 }, 01:05:59.815 "peer_address": { 01:05:59.815 "trtype": "TCP", 01:05:59.815 "adrfam": "IPv4", 01:05:59.815 "traddr": "10.0.0.1", 01:05:59.815 "trsvcid": "55124" 01:05:59.815 }, 01:05:59.815 "auth": { 01:05:59.815 "state": "completed", 01:05:59.815 "digest": "sha512", 01:05:59.815 "dhgroup": "ffdhe2048" 01:05:59.815 } 01:05:59.815 } 01:05:59.815 ]' 01:05:59.815 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:06:00.074 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:06:00.074 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:06:00.074 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:06:00.074 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:06:00.074 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:06:00.074 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:06:00.074 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:06:00.334 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:06:00.334 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:06:00.901 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:06:00.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:06:00.902 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:00.902 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:00.902 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:00.902 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:00.902 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:06:00.902 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:06:00.902 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:06:00.902 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:06:00.902 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 01:06:00.902 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:06:00.902 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:06:00.902 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:06:00.902 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:06:00.902 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:06:00.902 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:00.902 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:00.902 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:00.902 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:00.902 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:00.902 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:00.902 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:01.161 01:06:01.420 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:06:01.420 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:06:01.420 06:04:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:01.420 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:01.420 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:06:01.420 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:01.420 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:01.420 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:01.420 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:06:01.420 { 01:06:01.420 "cntlid": 113, 01:06:01.420 "qid": 0, 01:06:01.420 "state": "enabled", 01:06:01.420 "thread": "nvmf_tgt_poll_group_000", 01:06:01.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:06:01.420 "listen_address": { 01:06:01.420 "trtype": "TCP", 01:06:01.420 "adrfam": "IPv4", 01:06:01.420 "traddr": "10.0.0.3", 01:06:01.420 "trsvcid": "4420" 01:06:01.420 }, 01:06:01.420 "peer_address": { 01:06:01.420 "trtype": "TCP", 01:06:01.420 "adrfam": "IPv4", 01:06:01.420 "traddr": "10.0.0.1", 01:06:01.420 "trsvcid": "55152" 01:06:01.420 }, 01:06:01.420 "auth": { 01:06:01.420 "state": "completed", 01:06:01.420 "digest": "sha512", 01:06:01.420 "dhgroup": "ffdhe3072" 01:06:01.420 } 01:06:01.420 } 01:06:01.420 ]' 01:06:01.420 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:06:01.680 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:06:01.680 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:06:01.680 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:06:01.680 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:06:01.680 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:06:01.680 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:06:01.680 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:06:01.938 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:06:01.938 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 
01:06:02.509 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:06:02.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:06:02.509 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:02.509 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:02.509 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:02.509 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:02.509 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:06:02.509 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:06:02.509 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:06:02.509 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 01:06:02.509 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:06:02.509 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:06:02.509 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:06:02.509 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:06:02.509 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:06:02.509 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:02.509 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:02.509 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:02.509 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:02.509 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:02.509 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:02.509 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:02.808 01:06:02.808 06:04:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:06:02.808 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:06:02.808 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:03.083 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:03.083 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:06:03.083 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:03.083 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:03.083 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:03.083 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:06:03.083 { 01:06:03.083 "cntlid": 115, 01:06:03.083 "qid": 0, 01:06:03.083 "state": "enabled", 01:06:03.083 "thread": "nvmf_tgt_poll_group_000", 01:06:03.083 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:06:03.083 "listen_address": { 01:06:03.083 "trtype": "TCP", 01:06:03.083 "adrfam": "IPv4", 01:06:03.083 "traddr": "10.0.0.3", 01:06:03.083 "trsvcid": "4420" 01:06:03.083 }, 01:06:03.084 "peer_address": { 01:06:03.084 "trtype": "TCP", 01:06:03.084 "adrfam": "IPv4", 01:06:03.084 "traddr": "10.0.0.1", 01:06:03.084 "trsvcid": "52120" 01:06:03.084 }, 01:06:03.084 "auth": { 01:06:03.084 "state": "completed", 01:06:03.084 "digest": "sha512", 01:06:03.084 "dhgroup": "ffdhe3072" 01:06:03.084 } 01:06:03.084 } 01:06:03.084 ]' 01:06:03.084 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:06:03.084 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:06:03.084 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:06:03.341 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:06:03.341 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:06:03.341 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:06:03.341 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:06:03.341 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:06:03.599 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:06:03.599 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret 
DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:06:04.164 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:06:04.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:06:04.164 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:04.164 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:04.164 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:04.164 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:04.164 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:06:04.164 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:06:04.164 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:06:04.164 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 01:06:04.164 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:06:04.164 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:06:04.164 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:06:04.164 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:06:04.164 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:06:04.164 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:04.164 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:04.164 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:04.164 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:04.164 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:04.164 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:04.164 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:04.422 01:06:04.422 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:06:04.423 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:04.423 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:06:04.680 06:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:04.680 06:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:06:04.680 06:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:04.680 06:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:04.680 06:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:04.680 06:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:06:04.680 { 01:06:04.680 "cntlid": 117, 01:06:04.680 "qid": 0, 01:06:04.681 "state": "enabled", 01:06:04.681 "thread": "nvmf_tgt_poll_group_000", 01:06:04.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:06:04.681 "listen_address": { 01:06:04.681 "trtype": "TCP", 01:06:04.681 "adrfam": "IPv4", 01:06:04.681 "traddr": "10.0.0.3", 01:06:04.681 "trsvcid": "4420" 01:06:04.681 }, 01:06:04.681 "peer_address": { 01:06:04.681 "trtype": "TCP", 01:06:04.681 "adrfam": "IPv4", 01:06:04.681 "traddr": "10.0.0.1", 01:06:04.681 "trsvcid": "52146" 01:06:04.681 }, 01:06:04.681 "auth": { 01:06:04.681 "state": "completed", 01:06:04.681 "digest": "sha512", 01:06:04.681 "dhgroup": "ffdhe3072" 01:06:04.681 } 01:06:04.681 } 01:06:04.681 ]' 01:06:04.681 06:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:06:04.681 06:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:06:04.681 06:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:06:04.939 06:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:06:04.939 06:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:06:04.939 06:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:06:04.939 06:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:06:04.939 06:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:06:05.197 06:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:06:05.197 06:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:06:05.764 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:06:05.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:06:05.764 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:05.764 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:05.764 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:05.764 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:05.764 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:06:05.764 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:06:05.764 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:06:05.764 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 01:06:05.764 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:06:05.764 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:06:05.764 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:06:05.764 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:06:05.764 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:06:05.764 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key3 01:06:05.764 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:05.764 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:05.764 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:05.764 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:06:05.764 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:06:05.764 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:06:06.023 01:06:06.282 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:06:06.282 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:06:06.282 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:06.282 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:06.282 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:06:06.282 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:06.282 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:06.282 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:06.282 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:06:06.282 { 01:06:06.282 "cntlid": 119, 01:06:06.282 "qid": 0, 01:06:06.282 "state": "enabled", 01:06:06.282 "thread": "nvmf_tgt_poll_group_000", 01:06:06.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:06:06.282 "listen_address": { 01:06:06.282 "trtype": "TCP", 01:06:06.282 "adrfam": "IPv4", 01:06:06.282 "traddr": "10.0.0.3", 01:06:06.282 "trsvcid": "4420" 01:06:06.282 }, 01:06:06.282 "peer_address": { 01:06:06.282 "trtype": "TCP", 01:06:06.282 "adrfam": "IPv4", 01:06:06.282 "traddr": "10.0.0.1", 01:06:06.282 "trsvcid": "52182" 01:06:06.282 }, 01:06:06.282 "auth": { 01:06:06.282 "state": "completed", 01:06:06.282 "digest": "sha512", 01:06:06.282 "dhgroup": "ffdhe3072" 01:06:06.282 } 01:06:06.282 } 01:06:06.282 ]' 01:06:06.282 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:06:06.540 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:06:06.541 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:06:06.541 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:06:06.541 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:06:06.541 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:06:06.541 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:06:06.541 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:06:06.799 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:06:06.800 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:06:07.369 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:06:07.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:06:07.369 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:07.369 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:07.369 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:07.369 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:07.369 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:06:07.369 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:06:07.369 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:06:07.369 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:06:07.369 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 01:06:07.369 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:06:07.369 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:06:07.369 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:06:07.369 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:06:07.369 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:06:07.369 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:07.369 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:07.369 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:07.628 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:07.628 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:07.628 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:07.629 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:07.888 01:06:07.888 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:06:07.888 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:06:07.888 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:08.147 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:08.147 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:06:08.147 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:08.147 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:08.147 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:08.147 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:06:08.147 { 01:06:08.147 "cntlid": 121, 01:06:08.147 "qid": 0, 01:06:08.147 "state": "enabled", 01:06:08.147 "thread": "nvmf_tgt_poll_group_000", 01:06:08.147 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:06:08.147 "listen_address": { 01:06:08.147 "trtype": "TCP", 01:06:08.147 "adrfam": "IPv4", 01:06:08.147 "traddr": "10.0.0.3", 01:06:08.147 "trsvcid": "4420" 01:06:08.147 }, 01:06:08.147 "peer_address": { 01:06:08.147 "trtype": "TCP", 01:06:08.147 "adrfam": "IPv4", 01:06:08.147 "traddr": "10.0.0.1", 01:06:08.147 "trsvcid": "52220" 01:06:08.147 }, 01:06:08.147 "auth": { 01:06:08.147 "state": "completed", 01:06:08.147 "digest": "sha512", 01:06:08.147 "dhgroup": "ffdhe4096" 01:06:08.147 } 01:06:08.147 } 01:06:08.147 ]' 01:06:08.147 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:06:08.147 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:06:08.147 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:06:08.147 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:06:08.147 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:06:08.147 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:06:08.147 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:06:08.147 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:06:08.406 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret 
DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:06:08.406 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:06:08.975 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:06:08.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:06:08.975 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:08.975 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:08.975 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:08.975 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:08.975 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:06:08.975 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:06:08.975 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:06:09.235 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 01:06:09.235 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:06:09.235 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:06:09.235 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:06:09.235 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:06:09.235 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:06:09.235 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:09.235 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:09.235 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:09.235 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:09.235 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:09.235 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:09.235 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:09.493 01:06:09.493 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:06:09.493 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:06:09.493 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:09.752 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:09.752 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:06:09.752 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:09.752 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:09.752 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:09.752 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:06:09.752 { 01:06:09.752 "cntlid": 123, 01:06:09.752 "qid": 0, 01:06:09.752 "state": "enabled", 01:06:09.752 "thread": "nvmf_tgt_poll_group_000", 01:06:09.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:06:09.752 "listen_address": { 01:06:09.752 "trtype": "TCP", 01:06:09.752 "adrfam": "IPv4", 01:06:09.752 "traddr": "10.0.0.3", 01:06:09.752 "trsvcid": "4420" 01:06:09.752 }, 01:06:09.752 "peer_address": { 01:06:09.752 "trtype": "TCP", 01:06:09.752 "adrfam": "IPv4", 01:06:09.752 "traddr": "10.0.0.1", 01:06:09.752 "trsvcid": "52238" 01:06:09.752 }, 01:06:09.752 "auth": { 01:06:09.752 "state": "completed", 01:06:09.752 "digest": "sha512", 01:06:09.752 "dhgroup": "ffdhe4096" 01:06:09.752 } 01:06:09.752 } 01:06:09.752 ]' 01:06:09.752 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:06:09.752 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:06:09.752 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:06:09.752 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:06:09.752 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:06:09.752 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:06:09.752 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:06:09.752 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:06:10.011 06:05:04 
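Every attach in the trace is followed by the same three jq checks against the first qpair (target/auth.sh@75-77): digest, dhgroup, and auth state. A small helper capturing those checks could read as follows; the function name is hypothetical, and rpc/subnqn are assumed to be set as in the sketch above:

# Hypothetical helper mirroring the digest/dhgroup/state checks in the trace.
check_qpair_auth() {
    local digest=$1 dhgroup=$2 qpairs
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn") || return 1
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]] || return 1
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]] || return 1
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]] || return 1
}
# e.g. check_qpair_auth sha512 ffdhe4096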
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:06:10.011 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:06:10.578 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:06:10.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:06:10.578 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:10.578 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:10.578 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:10.578 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:10.578 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:06:10.578 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:06:10.578 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:06:10.836 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 01:06:10.836 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:06:10.836 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:06:10.836 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:06:10.836 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:06:10.836 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:06:10.836 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:10.836 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:10.836 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:10.836 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:10.836 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:10.836 06:05:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:10.836 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:11.093 01:06:11.093 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:06:11.093 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:06:11.093 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:11.351 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:11.351 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:06:11.351 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:11.351 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:11.351 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:11.351 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:06:11.351 { 01:06:11.351 "cntlid": 125, 01:06:11.351 "qid": 0, 01:06:11.351 "state": "enabled", 01:06:11.351 "thread": "nvmf_tgt_poll_group_000", 01:06:11.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:06:11.351 "listen_address": { 01:06:11.351 "trtype": "TCP", 01:06:11.351 "adrfam": "IPv4", 01:06:11.351 "traddr": "10.0.0.3", 01:06:11.351 "trsvcid": "4420" 01:06:11.351 }, 01:06:11.351 "peer_address": { 01:06:11.351 "trtype": "TCP", 01:06:11.351 "adrfam": "IPv4", 01:06:11.351 "traddr": "10.0.0.1", 01:06:11.351 "trsvcid": "52262" 01:06:11.351 }, 01:06:11.351 "auth": { 01:06:11.351 "state": "completed", 01:06:11.351 "digest": "sha512", 01:06:11.351 "dhgroup": "ffdhe4096" 01:06:11.351 } 01:06:11.351 } 01:06:11.351 ]' 01:06:11.351 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:06:11.351 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:06:11.351 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:06:11.351 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:06:11.351 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:06:11.351 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:06:11.351 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:06:11.351 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:06:11.609 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:06:11.609 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:06:12.175 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:06:12.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:06:12.175 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:12.175 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:12.175 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:12.175 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:12.175 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:06:12.175 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:06:12.175 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:06:12.434 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 01:06:12.434 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:06:12.434 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:06:12.434 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:06:12.434 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:06:12.434 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:06:12.434 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key3 01:06:12.434 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:12.434 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:12.434 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:12.434 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 01:06:12.434 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:06:12.434 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:06:12.692 01:06:12.692 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:06:12.692 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:06:12.692 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:12.950 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:12.950 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:06:12.950 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:12.950 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:12.950 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:12.950 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:06:12.950 { 01:06:12.950 "cntlid": 127, 01:06:12.950 "qid": 0, 01:06:12.950 "state": "enabled", 01:06:12.950 "thread": "nvmf_tgt_poll_group_000", 01:06:12.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:06:12.950 "listen_address": { 01:06:12.950 "trtype": "TCP", 01:06:12.950 "adrfam": "IPv4", 01:06:12.950 "traddr": "10.0.0.3", 01:06:12.950 "trsvcid": "4420" 01:06:12.950 }, 01:06:12.950 "peer_address": { 01:06:12.950 "trtype": "TCP", 01:06:12.950 "adrfam": "IPv4", 01:06:12.950 "traddr": "10.0.0.1", 01:06:12.950 "trsvcid": "57510" 01:06:12.950 }, 01:06:12.950 "auth": { 01:06:12.950 "state": "completed", 01:06:12.950 "digest": "sha512", 01:06:12.950 "dhgroup": "ffdhe4096" 01:06:12.950 } 01:06:12.950 } 01:06:12.950 ]' 01:06:12.950 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:06:12.950 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:06:12.950 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:06:12.950 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:06:12.950 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:06:13.208 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:06:13.208 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:06:13.208 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:06:13.208 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:06:13.208 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:06:13.776 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:06:13.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:06:13.776 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:13.776 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:13.776 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:13.776 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:13.776 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:06:13.776 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:06:13.776 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:06:13.776 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:06:14.037 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 01:06:14.037 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:06:14.037 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:06:14.037 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:06:14.037 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:06:14.037 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:06:14.037 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:14.037 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:14.037 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:14.037 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:14.037 06:05:08 
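After each host-RPC pass, the trace repeats the handshake from the kernel initiator with nvme-cli, passing the secrets on the command line. A sketch of that step, with placeholder DHHC-1 strings instead of the real test keys (the address, NQNs, and remaining flags are the ones shown in the log):

# Kernel-initiator side of the same handshake, as driven by nvme-cli in the trace.
# The DHHC-1 strings below are placeholders, not the keys used by the test.
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8" \
    --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 \
    --dhchap-secret "DHHC-1:00:<host key>:" \
    --dhchap-ctrl-secret "DHHC-1:03:<controller key>:"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0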
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:14.037 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:14.037 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:14.605 01:06:14.605 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:06:14.605 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:14.605 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:06:14.605 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:14.605 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:06:14.605 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:14.605 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:14.605 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:14.605 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:06:14.605 { 01:06:14.605 "cntlid": 129, 01:06:14.605 "qid": 0, 01:06:14.605 "state": "enabled", 01:06:14.605 "thread": "nvmf_tgt_poll_group_000", 01:06:14.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:06:14.605 "listen_address": { 01:06:14.605 "trtype": "TCP", 01:06:14.605 "adrfam": "IPv4", 01:06:14.605 "traddr": "10.0.0.3", 01:06:14.605 "trsvcid": "4420" 01:06:14.605 }, 01:06:14.605 "peer_address": { 01:06:14.605 "trtype": "TCP", 01:06:14.605 "adrfam": "IPv4", 01:06:14.605 "traddr": "10.0.0.1", 01:06:14.605 "trsvcid": "57550" 01:06:14.605 }, 01:06:14.605 "auth": { 01:06:14.605 "state": "completed", 01:06:14.605 "digest": "sha512", 01:06:14.605 "dhgroup": "ffdhe6144" 01:06:14.605 } 01:06:14.605 } 01:06:14.605 ]' 01:06:14.605 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:06:14.864 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:06:14.864 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:06:14.864 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:06:14.864 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:06:14.865 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:06:14.865 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:06:14.865 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:06:15.124 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:06:15.124 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:06:15.690 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:06:15.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:06:15.690 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:15.690 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:15.690 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:15.690 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:15.690 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:06:15.690 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:06:15.690 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:06:15.690 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 01:06:15.690 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:06:15.690 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:06:15.690 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:06:15.690 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:06:15.690 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:06:15.690 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:15.690 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:15.690 06:05:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:15.690 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:15.690 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:15.691 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:15.691 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:16.266 01:06:16.266 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:06:16.266 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:06:16.266 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:16.266 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:16.266 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:06:16.266 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:16.266 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:16.524 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:16.524 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:06:16.524 { 01:06:16.524 "cntlid": 131, 01:06:16.524 "qid": 0, 01:06:16.524 "state": "enabled", 01:06:16.524 "thread": "nvmf_tgt_poll_group_000", 01:06:16.524 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:06:16.524 "listen_address": { 01:06:16.524 "trtype": "TCP", 01:06:16.524 "adrfam": "IPv4", 01:06:16.524 "traddr": "10.0.0.3", 01:06:16.524 "trsvcid": "4420" 01:06:16.524 }, 01:06:16.524 "peer_address": { 01:06:16.525 "trtype": "TCP", 01:06:16.525 "adrfam": "IPv4", 01:06:16.525 "traddr": "10.0.0.1", 01:06:16.525 "trsvcid": "57566" 01:06:16.525 }, 01:06:16.525 "auth": { 01:06:16.525 "state": "completed", 01:06:16.525 "digest": "sha512", 01:06:16.525 "dhgroup": "ffdhe6144" 01:06:16.525 } 01:06:16.525 } 01:06:16.525 ]' 01:06:16.525 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:06:16.525 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:06:16.525 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:06:16.525 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:06:16.525 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 01:06:16.525 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:06:16.525 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:06:16.525 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:06:16.783 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:06:16.783 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:06:17.348 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:06:17.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:06:17.348 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:17.348 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:17.348 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:17.348 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:17.348 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:06:17.348 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:06:17.349 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:06:17.606 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 01:06:17.606 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:06:17.606 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:06:17.606 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:06:17.606 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:06:17.606 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:06:17.606 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:17.606 06:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:17.606 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:17.606 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:17.606 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:17.606 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:17.606 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:17.864 01:06:17.864 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:06:17.864 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:17.864 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:06:18.122 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:18.122 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:06:18.122 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:18.122 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:18.122 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:18.122 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:06:18.122 { 01:06:18.122 "cntlid": 133, 01:06:18.122 "qid": 0, 01:06:18.122 "state": "enabled", 01:06:18.122 "thread": "nvmf_tgt_poll_group_000", 01:06:18.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:06:18.122 "listen_address": { 01:06:18.122 "trtype": "TCP", 01:06:18.122 "adrfam": "IPv4", 01:06:18.122 "traddr": "10.0.0.3", 01:06:18.122 "trsvcid": "4420" 01:06:18.122 }, 01:06:18.122 "peer_address": { 01:06:18.122 "trtype": "TCP", 01:06:18.122 "adrfam": "IPv4", 01:06:18.122 "traddr": "10.0.0.1", 01:06:18.122 "trsvcid": "57606" 01:06:18.122 }, 01:06:18.122 "auth": { 01:06:18.122 "state": "completed", 01:06:18.122 "digest": "sha512", 01:06:18.122 "dhgroup": "ffdhe6144" 01:06:18.122 } 01:06:18.122 } 01:06:18.122 ]' 01:06:18.122 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:06:18.122 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:06:18.122 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:06:18.122 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 01:06:18.122 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:06:18.380 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:06:18.380 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:06:18.380 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:06:18.380 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:06:18.380 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:06:18.946 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:06:18.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:06:18.946 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:18.946 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:18.946 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:18.946 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:18.946 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:06:18.946 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:06:18.946 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:06:19.204 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 01:06:19.204 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:06:19.204 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:06:19.204 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:06:19.204 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:06:19.204 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:06:19.204 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key3 01:06:19.204 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:19.204 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:19.204 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:19.204 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:06:19.204 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:06:19.204 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:06:19.770 01:06:19.770 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:06:19.770 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:06:19.770 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:19.770 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:19.770 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:06:19.770 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:19.770 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:19.770 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:19.770 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:06:19.770 { 01:06:19.770 "cntlid": 135, 01:06:19.770 "qid": 0, 01:06:19.770 "state": "enabled", 01:06:19.770 "thread": "nvmf_tgt_poll_group_000", 01:06:19.770 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:06:19.770 "listen_address": { 01:06:19.770 "trtype": "TCP", 01:06:19.770 "adrfam": "IPv4", 01:06:19.770 "traddr": "10.0.0.3", 01:06:19.770 "trsvcid": "4420" 01:06:19.770 }, 01:06:19.770 "peer_address": { 01:06:19.770 "trtype": "TCP", 01:06:19.770 "adrfam": "IPv4", 01:06:19.770 "traddr": "10.0.0.1", 01:06:19.770 "trsvcid": "57638" 01:06:19.770 }, 01:06:19.770 "auth": { 01:06:19.770 "state": "completed", 01:06:19.770 "digest": "sha512", 01:06:19.770 "dhgroup": "ffdhe6144" 01:06:19.770 } 01:06:19.770 } 01:06:19.770 ]' 01:06:19.770 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:06:20.029 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:06:20.029 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:06:20.029 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:06:20.029 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:06:20.029 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:06:20.029 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:06:20.029 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:06:20.289 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:06:20.289 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:06:20.868 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:06:20.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:06:20.868 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:20.868 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:20.868 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:20.868 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:20.868 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:06:20.868 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:06:20.868 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:06:20.868 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:06:20.868 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 01:06:20.868 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:06:20.868 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:06:20.868 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:06:20.868 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:06:20.868 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:06:20.868 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:20.869 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:20.869 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:20.869 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:20.869 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:20.869 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:20.869 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:21.439 01:06:21.439 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:06:21.439 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:06:21.439 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:21.698 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:21.698 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:06:21.698 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:21.698 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:21.698 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:21.698 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:06:21.698 { 01:06:21.698 "cntlid": 137, 01:06:21.698 "qid": 0, 01:06:21.698 "state": "enabled", 01:06:21.698 "thread": "nvmf_tgt_poll_group_000", 01:06:21.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:06:21.698 "listen_address": { 01:06:21.698 "trtype": "TCP", 01:06:21.698 "adrfam": "IPv4", 01:06:21.698 "traddr": "10.0.0.3", 01:06:21.698 "trsvcid": "4420" 01:06:21.698 }, 01:06:21.698 "peer_address": { 01:06:21.698 "trtype": "TCP", 01:06:21.698 "adrfam": "IPv4", 01:06:21.698 "traddr": "10.0.0.1", 01:06:21.698 "trsvcid": "57660" 01:06:21.698 }, 01:06:21.698 "auth": { 01:06:21.698 "state": "completed", 01:06:21.698 "digest": "sha512", 01:06:21.698 "dhgroup": "ffdhe8192" 01:06:21.698 } 01:06:21.698 } 01:06:21.698 ]' 01:06:21.698 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:06:21.698 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:06:21.698 06:05:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:06:21.698 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:06:21.698 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:06:21.957 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:06:21.957 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:06:21.957 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:06:21.957 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:06:21.957 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:06:22.529 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:06:22.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:06:22.529 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:22.529 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:22.530 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:22.530 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:22.530 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:06:22.530 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:06:22.530 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:06:22.789 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 01:06:22.789 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:06:22.789 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:06:22.789 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:06:22.789 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:06:22.789 06:05:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:06:22.789 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:22.789 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:22.789 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:22.789 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:22.789 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:22.789 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:22.789 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:23.356 01:06:23.356 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:06:23.356 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:06:23.356 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:23.615 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:23.615 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:06:23.615 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:23.615 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:23.615 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:23.615 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:06:23.615 { 01:06:23.615 "cntlid": 139, 01:06:23.615 "qid": 0, 01:06:23.615 "state": "enabled", 01:06:23.615 "thread": "nvmf_tgt_poll_group_000", 01:06:23.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:06:23.615 "listen_address": { 01:06:23.616 "trtype": "TCP", 01:06:23.616 "adrfam": "IPv4", 01:06:23.616 "traddr": "10.0.0.3", 01:06:23.616 "trsvcid": "4420" 01:06:23.616 }, 01:06:23.616 "peer_address": { 01:06:23.616 "trtype": "TCP", 01:06:23.616 "adrfam": "IPv4", 01:06:23.616 "traddr": "10.0.0.1", 01:06:23.616 "trsvcid": "35744" 01:06:23.616 }, 01:06:23.616 "auth": { 01:06:23.616 "state": "completed", 01:06:23.616 "digest": "sha512", 01:06:23.616 "dhgroup": "ffdhe8192" 01:06:23.616 } 01:06:23.616 } 01:06:23.616 ]' 01:06:23.616 06:05:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:06:23.616 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:06:23.616 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:06:23.616 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:06:23.616 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:06:23.616 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:06:23.616 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:06:23.616 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:06:23.875 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:06:23.875 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: --dhchap-ctrl-secret DHHC-1:02:ODRiYjAyZGM1ODY5OGExOTM5MzdmYWQwZGE4ZTMyNTNjNTJhMzYzODNjZGU4YjA1xOMCWw==: 01:06:24.442 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:06:24.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:06:24.442 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:24.442 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:24.442 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:24.442 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:24.442 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:06:24.442 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:06:24.442 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:06:24.727 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 01:06:24.727 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:06:24.727 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:06:24.727 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 01:06:24.727 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:06:24.727 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:06:24.727 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:24.727 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:24.727 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:24.727 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:24.727 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:24.727 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:24.727 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:25.294 01:06:25.294 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:06:25.294 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:06:25.294 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:25.552 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:25.552 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:06:25.552 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:25.552 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:25.552 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:25.552 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:06:25.552 { 01:06:25.552 "cntlid": 141, 01:06:25.552 "qid": 0, 01:06:25.552 "state": "enabled", 01:06:25.552 "thread": "nvmf_tgt_poll_group_000", 01:06:25.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:06:25.552 "listen_address": { 01:06:25.552 "trtype": "TCP", 01:06:25.552 "adrfam": "IPv4", 01:06:25.552 "traddr": "10.0.0.3", 01:06:25.552 "trsvcid": "4420" 01:06:25.552 }, 01:06:25.552 "peer_address": { 01:06:25.552 "trtype": "TCP", 01:06:25.552 "adrfam": "IPv4", 01:06:25.552 "traddr": "10.0.0.1", 01:06:25.552 "trsvcid": "35772" 01:06:25.552 }, 01:06:25.552 "auth": { 01:06:25.552 "state": "completed", 01:06:25.552 "digest": 
"sha512", 01:06:25.552 "dhgroup": "ffdhe8192" 01:06:25.552 } 01:06:25.552 } 01:06:25.552 ]' 01:06:25.552 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:06:25.552 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:06:25.553 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:06:25.553 06:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:06:25.553 06:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:06:25.553 06:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:06:25.553 06:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:06:25.553 06:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:06:25.811 06:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:06:25.811 06:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:01:NTVjZTk3MGZiMzAzYjNjMzQ1OTJlNzk1NzljY2VkZTXVSGLv: 01:06:26.377 06:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:06:26.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:06:26.377 06:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:26.377 06:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:26.377 06:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:26.377 06:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:26.377 06:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:06:26.377 06:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:06:26.377 06:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:06:26.636 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 01:06:26.636 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:06:26.636 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 01:06:26.636 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:06:26.636 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:06:26.636 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:06:26.636 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key3 01:06:26.636 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:26.636 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:26.636 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:26.636 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:06:26.636 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:06:26.636 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:06:27.203 01:06:27.203 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:06:27.203 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:27.203 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:06:27.203 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:27.203 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:06:27.203 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:27.203 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:27.462 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:27.462 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:06:27.462 { 01:06:27.462 "cntlid": 143, 01:06:27.462 "qid": 0, 01:06:27.462 "state": "enabled", 01:06:27.462 "thread": "nvmf_tgt_poll_group_000", 01:06:27.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:06:27.462 "listen_address": { 01:06:27.462 "trtype": "TCP", 01:06:27.462 "adrfam": "IPv4", 01:06:27.462 "traddr": "10.0.0.3", 01:06:27.462 "trsvcid": "4420" 01:06:27.462 }, 01:06:27.462 "peer_address": { 01:06:27.462 "trtype": "TCP", 01:06:27.462 "adrfam": "IPv4", 01:06:27.462 "traddr": "10.0.0.1", 01:06:27.462 "trsvcid": "35800" 01:06:27.462 }, 01:06:27.462 "auth": { 01:06:27.462 "state": "completed", 01:06:27.462 
"digest": "sha512", 01:06:27.462 "dhgroup": "ffdhe8192" 01:06:27.462 } 01:06:27.462 } 01:06:27.462 ]' 01:06:27.462 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:06:27.462 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:06:27.462 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:06:27.462 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:06:27.462 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:06:27.462 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:06:27.462 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:06:27.462 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:06:27.720 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:06:27.720 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:06:28.289 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:06:28.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:06:28.289 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:28.289 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:28.289 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:28.289 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:28.289 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 01:06:28.289 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 01:06:28.289 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 01:06:28.289 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:06:28.289 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:06:28.289 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:06:28.549 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 01:06:28.549 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:06:28.549 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:06:28.549 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:06:28.549 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:06:28.549 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:06:28.549 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:28.549 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:28.549 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:28.549 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:28.549 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:28.549 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:28.549 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:29.117 01:06:29.117 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:06:29.117 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:06:29.117 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:29.117 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:29.117 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:06:29.117 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:29.117 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:29.117 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:29.117 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:06:29.117 { 01:06:29.117 "cntlid": 145, 01:06:29.117 "qid": 0, 01:06:29.117 "state": "enabled", 01:06:29.117 "thread": "nvmf_tgt_poll_group_000", 01:06:29.117 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:06:29.117 "listen_address": { 01:06:29.117 "trtype": "TCP", 01:06:29.117 "adrfam": "IPv4", 01:06:29.117 "traddr": "10.0.0.3", 01:06:29.117 "trsvcid": "4420" 01:06:29.117 }, 01:06:29.117 "peer_address": { 01:06:29.117 "trtype": "TCP", 01:06:29.117 "adrfam": "IPv4", 01:06:29.117 "traddr": "10.0.0.1", 01:06:29.117 "trsvcid": "35824" 01:06:29.117 }, 01:06:29.117 "auth": { 01:06:29.117 "state": "completed", 01:06:29.117 "digest": "sha512", 01:06:29.117 "dhgroup": "ffdhe8192" 01:06:29.117 } 01:06:29.117 } 01:06:29.117 ]' 01:06:29.117 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:06:29.376 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:06:29.376 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:06:29.376 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:06:29.376 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:06:29.376 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:06:29.376 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:06:29.376 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:06:29.635 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:06:29.635 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:00:OTUxNjM0ODMzZTk2NWE4MmYxZWU4NjE4M2Y3NTdjMTU5MDRiZWQyOTQ2MDQ1ZGU2WXMajg==: --dhchap-ctrl-secret DHHC-1:03:Y2MzY2ZiZjk1NWI2NGMyOTMyNzIxZWFkZTYzM2RmN2FhYjk3YWU1ZDc3NmY5OTJmMjlmYjAyNThjM2MyNDcyOaG5i8k=: 01:06:30.201 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:06:30.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:06:30.201 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:30.201 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:30.201 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:30.201 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:30.201 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key1 01:06:30.201 06:05:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:30.201 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:30.201 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:30.201 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 01:06:30.202 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:06:30.202 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 01:06:30.202 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 01:06:30.202 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:30.202 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 01:06:30.202 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:30.202 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 01:06:30.202 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 01:06:30.202 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 01:06:30.459 request: 01:06:30.459 { 01:06:30.459 "name": "nvme0", 01:06:30.459 "trtype": "tcp", 01:06:30.459 "traddr": "10.0.0.3", 01:06:30.459 "adrfam": "ipv4", 01:06:30.459 "trsvcid": "4420", 01:06:30.459 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:06:30.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:06:30.459 "prchk_reftag": false, 01:06:30.459 "prchk_guard": false, 01:06:30.459 "hdgst": false, 01:06:30.459 "ddgst": false, 01:06:30.459 "dhchap_key": "key2", 01:06:30.459 "allow_unrecognized_csi": false, 01:06:30.459 "method": "bdev_nvme_attach_controller", 01:06:30.459 "req_id": 1 01:06:30.459 } 01:06:30.459 Got JSON-RPC error response 01:06:30.459 response: 01:06:30.459 { 01:06:30.459 "code": -5, 01:06:30.459 "message": "Input/output error" 01:06:30.459 } 01:06:30.717 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:06:30.718 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:06:30.718 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:06:30.718 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:06:30.718 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:30.718 
06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:30.718 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:30.718 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:30.718 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:30.718 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:30.718 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:30.718 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:30.718 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:06:30.718 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:06:30.718 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:06:30.718 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 01:06:30.718 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:30.718 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 01:06:30.718 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:30.718 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:06:30.718 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:06:30.718 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:06:31.043 request: 01:06:31.043 { 01:06:31.043 "name": "nvme0", 01:06:31.043 "trtype": "tcp", 01:06:31.043 "traddr": "10.0.0.3", 01:06:31.043 "adrfam": "ipv4", 01:06:31.043 "trsvcid": "4420", 01:06:31.043 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:06:31.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:06:31.043 "prchk_reftag": false, 01:06:31.043 "prchk_guard": false, 01:06:31.043 "hdgst": false, 01:06:31.043 "ddgst": false, 01:06:31.043 "dhchap_key": "key1", 01:06:31.043 "dhchap_ctrlr_key": "ckey2", 01:06:31.043 "allow_unrecognized_csi": false, 01:06:31.043 "method": "bdev_nvme_attach_controller", 01:06:31.043 "req_id": 1 01:06:31.043 } 01:06:31.043 Got JSON-RPC error response 01:06:31.043 response: 01:06:31.043 { 
01:06:31.043 "code": -5, 01:06:31.043 "message": "Input/output error" 01:06:31.043 } 01:06:31.043 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:06:31.043 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:06:31.043 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:06:31.043 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:06:31.043 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:31.043 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:31.043 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:31.043 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:31.043 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key1 01:06:31.043 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:31.043 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:31.303 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:31.303 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:31.303 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:06:31.303 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:31.303 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 01:06:31.303 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:31.303 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 01:06:31.303 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:31.303 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:31.303 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:31.303 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:31.561 
request: 01:06:31.561 { 01:06:31.561 "name": "nvme0", 01:06:31.561 "trtype": "tcp", 01:06:31.561 "traddr": "10.0.0.3", 01:06:31.561 "adrfam": "ipv4", 01:06:31.561 "trsvcid": "4420", 01:06:31.561 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:06:31.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:06:31.561 "prchk_reftag": false, 01:06:31.561 "prchk_guard": false, 01:06:31.561 "hdgst": false, 01:06:31.561 "ddgst": false, 01:06:31.561 "dhchap_key": "key1", 01:06:31.561 "dhchap_ctrlr_key": "ckey1", 01:06:31.561 "allow_unrecognized_csi": false, 01:06:31.561 "method": "bdev_nvme_attach_controller", 01:06:31.561 "req_id": 1 01:06:31.561 } 01:06:31.561 Got JSON-RPC error response 01:06:31.561 response: 01:06:31.561 { 01:06:31.561 "code": -5, 01:06:31.561 "message": "Input/output error" 01:06:31.561 } 01:06:31.561 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:06:31.561 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:06:31.561 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:06:31.561 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:06:31.561 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:31.561 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:31.561 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:31.561 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:31.561 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67078 01:06:31.561 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67078 ']' 01:06:31.561 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67078 01:06:31.561 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 01:06:31.561 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:06:31.561 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67078 01:06:31.561 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:06:31.561 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:06:31.561 killing process with pid 67078 01:06:31.561 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67078' 01:06:31.561 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67078 01:06:31.561 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67078 01:06:31.819 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 01:06:31.819 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:06:31.819 06:05:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 01:06:31.819 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:31.819 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=69796 01:06:31.819 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 01:06:31.819 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 69796 01:06:31.819 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 69796 ']' 01:06:31.819 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:06:31.819 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 01:06:31.819 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:06:31.819 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 01:06:31.819 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:32.756 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:06:32.756 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 01:06:32.756 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:06:32.756 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 01:06:32.756 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:32.756 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:06:32.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:06:32.756 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 01:06:32.756 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 69796 01:06:32.756 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 69796 ']' 01:06:32.756 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:06:32.756 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 01:06:32.756 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
01:06:32.756 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 01:06:32.756 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:33.014 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:06:33.014 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 01:06:33.014 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 01:06:33.014 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:33.014 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:33.273 null0 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DGa 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.vPz ]] 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vPz 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.5Rt 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Cqv ]] 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Cqv 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 01:06:33.273 06:05:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.wNf 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Ds2 ]] 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ds2 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.A3x 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key3 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
01:06:33.273 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:06:34.210 nvme0n1 01:06:34.210 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:06:34.210 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:34.211 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:06:34.211 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:34.211 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:06:34.211 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:34.211 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:34.211 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:34.211 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:06:34.211 { 01:06:34.211 "cntlid": 1, 01:06:34.211 "qid": 0, 01:06:34.211 "state": "enabled", 01:06:34.211 "thread": "nvmf_tgt_poll_group_000", 01:06:34.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:06:34.211 "listen_address": { 01:06:34.211 "trtype": "TCP", 01:06:34.211 "adrfam": "IPv4", 01:06:34.211 "traddr": "10.0.0.3", 01:06:34.211 "trsvcid": "4420" 01:06:34.211 }, 01:06:34.211 "peer_address": { 01:06:34.211 "trtype": "TCP", 01:06:34.211 "adrfam": "IPv4", 01:06:34.211 "traddr": "10.0.0.1", 01:06:34.211 "trsvcid": "48692" 01:06:34.211 }, 01:06:34.211 "auth": { 01:06:34.211 "state": "completed", 01:06:34.211 "digest": "sha512", 01:06:34.211 "dhgroup": "ffdhe8192" 01:06:34.211 } 01:06:34.211 } 01:06:34.211 ]' 01:06:34.211 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:06:34.470 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:06:34.470 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:06:34.470 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:06:34.470 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:06:34.470 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:06:34.470 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:06:34.470 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:06:34.728 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:06:34.729 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:06:35.297 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:06:35.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:06:35.297 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:35.297 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:35.297 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:35.297 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:35.297 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key3 01:06:35.297 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:35.297 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:35.297 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:35.297 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 01:06:35.297 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 01:06:35.556 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 01:06:35.556 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:06:35.556 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 01:06:35.556 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 01:06:35.556 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:35.556 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 01:06:35.556 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:35.556 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 01:06:35.556 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:06:35.556 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:06:35.815 request: 01:06:35.815 { 01:06:35.815 "name": "nvme0", 01:06:35.815 "trtype": "tcp", 01:06:35.815 "traddr": "10.0.0.3", 01:06:35.815 "adrfam": "ipv4", 01:06:35.815 "trsvcid": "4420", 01:06:35.815 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:06:35.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:06:35.815 "prchk_reftag": false, 01:06:35.815 "prchk_guard": false, 01:06:35.815 "hdgst": false, 01:06:35.815 "ddgst": false, 01:06:35.815 "dhchap_key": "key3", 01:06:35.815 "allow_unrecognized_csi": false, 01:06:35.815 "method": "bdev_nvme_attach_controller", 01:06:35.815 "req_id": 1 01:06:35.815 } 01:06:35.815 Got JSON-RPC error response 01:06:35.815 response: 01:06:35.815 { 01:06:35.815 "code": -5, 01:06:35.815 "message": "Input/output error" 01:06:35.815 } 01:06:35.815 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:06:35.815 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:06:35.815 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:06:35.815 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:06:35.815 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 01:06:35.815 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 01:06:35.815 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 01:06:35.815 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 01:06:35.815 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 01:06:35.815 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:06:35.815 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 01:06:35.815 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 01:06:35.815 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:35.815 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 01:06:35.815 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:35.815 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 01:06:35.815 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:06:35.815 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:06:36.073 request: 01:06:36.073 { 01:06:36.073 "name": "nvme0", 01:06:36.073 "trtype": "tcp", 01:06:36.073 "traddr": "10.0.0.3", 01:06:36.073 "adrfam": "ipv4", 01:06:36.073 "trsvcid": "4420", 01:06:36.073 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:06:36.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:06:36.073 "prchk_reftag": false, 01:06:36.073 "prchk_guard": false, 01:06:36.073 "hdgst": false, 01:06:36.073 "ddgst": false, 01:06:36.073 "dhchap_key": "key3", 01:06:36.073 "allow_unrecognized_csi": false, 01:06:36.073 "method": "bdev_nvme_attach_controller", 01:06:36.073 "req_id": 1 01:06:36.073 } 01:06:36.073 Got JSON-RPC error response 01:06:36.073 response: 01:06:36.073 { 01:06:36.073 "code": -5, 01:06:36.073 "message": "Input/output error" 01:06:36.073 } 01:06:36.073 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:06:36.073 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:06:36.073 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:06:36.073 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:06:36.073 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 01:06:36.073 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 01:06:36.073 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 01:06:36.073 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:06:36.073 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:06:36.073 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:06:36.346 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:36.346 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:36.346 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:36.346 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:36.346 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:36.346 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:36.346 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:36.346 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:36.346 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 01:06:36.346 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:06:36.346 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 01:06:36.346 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 01:06:36.346 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:36.346 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 01:06:36.346 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:36.346 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 01:06:36.346 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 01:06:36.346 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 01:06:36.604 request: 01:06:36.604 { 01:06:36.604 "name": "nvme0", 01:06:36.604 "trtype": "tcp", 01:06:36.604 "traddr": "10.0.0.3", 01:06:36.604 "adrfam": "ipv4", 01:06:36.604 "trsvcid": "4420", 01:06:36.604 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:06:36.604 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:06:36.604 "prchk_reftag": false, 01:06:36.604 "prchk_guard": false, 01:06:36.604 "hdgst": false, 01:06:36.604 "ddgst": false, 01:06:36.604 "dhchap_key": "key0", 01:06:36.604 "dhchap_ctrlr_key": "key1", 01:06:36.604 "allow_unrecognized_csi": false, 01:06:36.604 "method": "bdev_nvme_attach_controller", 01:06:36.604 "req_id": 1 01:06:36.604 } 01:06:36.604 Got JSON-RPC error response 01:06:36.604 response: 01:06:36.604 { 01:06:36.604 "code": -5, 01:06:36.604 "message": "Input/output error" 01:06:36.604 } 01:06:36.861 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:06:36.861 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:06:36.861 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:06:36.861 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 01:06:36.861 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 01:06:36.861 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 01:06:36.861 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 01:06:37.118 nvme0n1 01:06:37.118 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 01:06:37.118 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 01:06:37.118 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:37.118 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:37.118 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 01:06:37.118 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:06:37.376 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key1 01:06:37.376 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:37.376 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:37.376 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:37.376 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 01:06:37.376 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 01:06:37.376 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 01:06:38.309 nvme0n1 01:06:38.309 06:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 01:06:38.309 06:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:38.309 06:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 01:06:38.309 06:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:38.309 06:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key2 --dhchap-ctrlr-key key3 01:06:38.309 06:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:38.309 06:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:38.309 06:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:38.309 06:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 01:06:38.309 06:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:38.309 06:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 01:06:38.565 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:38.566 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:06:38.566 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid bac40580-41f0-4da4-8cd9-1be4901a67b8 -l 0 --dhchap-secret DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: --dhchap-ctrl-secret DHHC-1:03:NWE4YzgyZWFmZjRkZWI3ODhiNWM0OGJjOThhN2E1ZGIxNzQ1NzAyM2ZmNTdmYTIwZWRmZWVhMmU4MGI4MWYxMbjsih4=: 01:06:39.130 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 01:06:39.130 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 01:06:39.130 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 01:06:39.130 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 01:06:39.130 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 01:06:39.130 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 01:06:39.130 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 01:06:39.130 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 01:06:39.130 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:06:39.388 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 01:06:39.388 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:06:39.388 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 01:06:39.388 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 01:06:39.388 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:39.388 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 01:06:39.388 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:39.388 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 01:06:39.388 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 01:06:39.388 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 01:06:39.956 request: 01:06:39.956 { 01:06:39.956 "name": "nvme0", 01:06:39.957 "trtype": "tcp", 01:06:39.957 "traddr": "10.0.0.3", 01:06:39.957 "adrfam": "ipv4", 01:06:39.957 "trsvcid": "4420", 01:06:39.957 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:06:39.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8", 01:06:39.957 "prchk_reftag": false, 01:06:39.957 "prchk_guard": false, 01:06:39.957 "hdgst": false, 01:06:39.957 "ddgst": false, 01:06:39.957 "dhchap_key": "key1", 01:06:39.957 "allow_unrecognized_csi": false, 01:06:39.957 "method": "bdev_nvme_attach_controller", 01:06:39.957 "req_id": 1 01:06:39.957 } 01:06:39.957 Got JSON-RPC error response 01:06:39.957 response: 01:06:39.957 { 01:06:39.957 "code": -5, 01:06:39.957 "message": "Input/output error" 01:06:39.957 } 01:06:39.957 06:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:06:39.957 06:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:06:39.957 06:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:06:39.957 06:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:06:39.957 06:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 01:06:39.957 06:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 01:06:39.957 06:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 01:06:40.525 nvme0n1 01:06:40.526 
06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 01:06:40.526 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 01:06:40.526 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:40.784 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:40.784 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 01:06:40.784 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:06:41.043 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:41.043 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:41.043 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:41.043 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:41.043 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 01:06:41.043 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 01:06:41.043 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 01:06:41.301 nvme0n1 01:06:41.301 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 01:06:41.301 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:41.301 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 01:06:41.559 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:41.559 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 01:06:41.559 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:06:41.816 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key1 --dhchap-ctrlr-key key3 01:06:41.816 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:41.816 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:41.816 06:05:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:41.816 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: '' 2s 01:06:41.816 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 01:06:41.816 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 01:06:41.816 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: 01:06:41.816 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 01:06:41.816 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 01:06:41.816 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 01:06:41.816 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: ]] 01:06:41.816 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OTYyNmE0ODdlNmMwZGY4YmMwZDAzYWJjYzkwMzE2MjHvYfYw: 01:06:41.816 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 01:06:41.816 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 01:06:41.816 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 01:06:43.716 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 01:06:43.716 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 01:06:43.716 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 01:06:43.716 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 01:06:43.716 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 01:06:43.716 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 01:06:43.716 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 01:06:43.716 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key1 --dhchap-ctrlr-key key2 01:06:43.716 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:43.716 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:43.716 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:43.716 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: 2s 01:06:43.716 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 01:06:43.716 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 01:06:43.716 06:05:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 01:06:43.716 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: 01:06:43.716 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 01:06:43.716 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 01:06:43.716 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 01:06:43.716 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: ]] 01:06:43.716 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZmI0MjE3M2Y1NWNjNzgzMWViZDk3MjQwZTk3M2Y5OTFkODk2Y2E1MDgxNmIwYjI57Wv76Q==: 01:06:43.716 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 01:06:43.716 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 01:06:46.249 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 01:06:46.249 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 01:06:46.249 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 01:06:46.249 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 01:06:46.249 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 01:06:46.249 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 01:06:46.249 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 01:06:46.249 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:06:46.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:06:46.249 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key0 --dhchap-ctrlr-key key1 01:06:46.249 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:46.249 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:46.249 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:46.249 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:06:46.249 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:06:46.249 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:06:46.509 nvme0n1 01:06:46.768 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key2 --dhchap-ctrlr-key key3 01:06:46.768 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:46.768 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:46.768 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:46.768 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 01:06:46.768 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 01:06:47.337 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 01:06:47.337 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 01:06:47.337 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:47.337 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:47.337 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:47.337 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:47.337 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:47.337 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:47.337 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 01:06:47.337 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 01:06:47.597 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 01:06:47.597 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:47.597 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 01:06:47.858 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:47.858 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key2 --dhchap-ctrlr-key key3 01:06:47.858 06:05:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:47.858 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:47.858 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:47.858 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 01:06:47.858 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:06:47.858 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 01:06:47.858 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 01:06:47.858 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:47.858 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 01:06:47.858 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:47.858 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 01:06:47.858 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 01:06:48.444 request: 01:06:48.444 { 01:06:48.444 "name": "nvme0", 01:06:48.444 "dhchap_key": "key1", 01:06:48.444 "dhchap_ctrlr_key": "key3", 01:06:48.444 "method": "bdev_nvme_set_keys", 01:06:48.444 "req_id": 1 01:06:48.444 } 01:06:48.444 Got JSON-RPC error response 01:06:48.444 response: 01:06:48.444 { 01:06:48.444 "code": -13, 01:06:48.444 "message": "Permission denied" 01:06:48.444 } 01:06:48.444 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:06:48.444 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:06:48.444 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:06:48.444 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:06:48.444 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 01:06:48.444 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 01:06:48.444 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:48.444 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 01:06:48.444 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 01:06:49.830 06:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 01:06:49.830 06:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:49.830 06:05:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 01:06:49.830 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 01:06:49.830 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key0 --dhchap-ctrlr-key key1 01:06:49.830 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:49.830 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:49.830 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:49.830 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:06:49.830 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:06:49.830 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:06:50.396 nvme0n1 01:06:50.396 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --dhchap-key key2 --dhchap-ctrlr-key key3 01:06:50.396 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:50.396 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:50.396 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:50.396 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 01:06:50.396 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:06:50.396 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 01:06:50.396 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 01:06:50.396 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:50.396 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 01:06:50.396 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:50.396 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 
--dhchap-key key2 --dhchap-ctrlr-key key0 01:06:50.396 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 01:06:50.961 request: 01:06:50.961 { 01:06:50.961 "name": "nvme0", 01:06:50.961 "dhchap_key": "key2", 01:06:50.961 "dhchap_ctrlr_key": "key0", 01:06:50.961 "method": "bdev_nvme_set_keys", 01:06:50.961 "req_id": 1 01:06:50.961 } 01:06:50.961 Got JSON-RPC error response 01:06:50.961 response: 01:06:50.961 { 01:06:50.961 "code": -13, 01:06:50.961 "message": "Permission denied" 01:06:50.961 } 01:06:50.961 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:06:50.961 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:06:50.961 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:06:50.961 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:06:50.961 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 01:06:50.961 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 01:06:50.961 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:51.219 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 01:06:51.219 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 01:06:52.155 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 01:06:52.155 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 01:06:52.155 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:06:52.414 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 01:06:52.414 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 01:06:52.414 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 01:06:52.414 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67110 01:06:52.414 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67110 ']' 01:06:52.414 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67110 01:06:52.414 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 01:06:52.414 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:06:52.414 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67110 01:06:52.414 killing process with pid 67110 01:06:52.415 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:06:52.415 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:06:52.415 06:05:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67110' 01:06:52.415 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67110 01:06:52.415 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67110 01:06:52.984 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 01:06:52.984 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 01:06:52.984 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 01:06:52.984 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:06:52.984 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 01:06:52.984 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 01:06:52.984 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:06:52.984 rmmod nvme_tcp 01:06:52.984 rmmod nvme_fabrics 01:06:52.984 rmmod nvme_keyring 01:06:52.984 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:06:52.984 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 01:06:52.984 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 01:06:52.984 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 69796 ']' 01:06:52.984 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 69796 01:06:52.984 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 69796 ']' 01:06:52.984 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 69796 01:06:52.984 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 01:06:52.984 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:06:52.984 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69796 01:06:53.243 killing process with pid 69796 01:06:53.243 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:06:53.243 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:06:53.243 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69796' 01:06:53.243 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 69796 01:06:53.243 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 69796 01:06:53.243 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:06:53.243 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:06:53.243 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:06:53.243 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 01:06:53.243 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
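The sequence exercised just above is the DH-HMAC-CHAP re-key flow: each rotation pairs an nvmf_subsystem_set_keys call on the target (staging the new key pair for this host NQN) with a bdev_nvme_set_keys call against the already-attached host controller, and the Permission denied responses show that a host trying to switch to a key pair the subsystem has not been given is rejected. Reduced to the two rpc.py invocations that appear essentially verbatim in the trace (target side using the harness's default RPC socket, host side using /var/tmp/host.sock as in the log), one successful rotation step is:

    # target: allow this host NQN to authenticate with key2/key3 from now on
    rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # host: re-authenticate the attached controller nvme0 with the same pair
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3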
01:06:53.243 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 01:06:53.243 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:06:53.243 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:06:53.243 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:06:53.243 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:06:53.243 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:06:53.243 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:06:53.516 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:06:53.516 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:06:53.516 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:06:53.516 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:06:53.516 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:06:53.516 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:06:53.516 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:06:53.516 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:06:53.516 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:06:53.516 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:06:53.516 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 01:06:53.516 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:06:53.516 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:06:53.516 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:53.516 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 01:06:53.516 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.DGa /tmp/spdk.key-sha256.5Rt /tmp/spdk.key-sha384.wNf /tmp/spdk.key-sha512.A3x /tmp/spdk.key-sha512.vPz /tmp/spdk.key-sha384.Cqv /tmp/spdk.key-sha256.Ds2 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 01:06:53.775 01:06:53.775 real 2m33.267s 01:06:53.775 user 5m48.388s 01:06:53.775 sys 0m33.886s 01:06:53.775 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 01:06:53.775 ************************************ 01:06:53.775 END TEST nvmf_auth_target 01:06:53.775 ************************************ 01:06:53.775 06:05:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:06:53.775 06:05:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 01:06:53.775 06:05:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 01:06:53.775 06:05:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:06:53.775 06:05:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:06:53.775 06:05:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:06:53.775 ************************************ 01:06:53.775 START TEST nvmf_bdevio_no_huge 01:06:53.775 ************************************ 01:06:53.775 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 01:06:53.775 * Looking for test storage... 01:06:53.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:06:53.775 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:06:53.775 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 01:06:53.775 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:06:54.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:06:54.035 --rc genhtml_branch_coverage=1 01:06:54.035 --rc genhtml_function_coverage=1 01:06:54.035 --rc genhtml_legend=1 01:06:54.035 --rc geninfo_all_blocks=1 01:06:54.035 --rc geninfo_unexecuted_blocks=1 01:06:54.035 01:06:54.035 ' 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:06:54.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:06:54.035 --rc genhtml_branch_coverage=1 01:06:54.035 --rc genhtml_function_coverage=1 01:06:54.035 --rc genhtml_legend=1 01:06:54.035 --rc geninfo_all_blocks=1 01:06:54.035 --rc geninfo_unexecuted_blocks=1 01:06:54.035 01:06:54.035 ' 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:06:54.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:06:54.035 --rc genhtml_branch_coverage=1 01:06:54.035 --rc genhtml_function_coverage=1 01:06:54.035 --rc genhtml_legend=1 01:06:54.035 --rc geninfo_all_blocks=1 01:06:54.035 --rc geninfo_unexecuted_blocks=1 01:06:54.035 01:06:54.035 ' 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:06:54.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:06:54.035 --rc genhtml_branch_coverage=1 01:06:54.035 --rc genhtml_function_coverage=1 01:06:54.035 --rc genhtml_legend=1 01:06:54.035 --rc geninfo_all_blocks=1 01:06:54.035 --rc geninfo_unexecuted_blocks=1 01:06:54.035 01:06:54.035 ' 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:06:54.035 
06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:54.035 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:06:54.036 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:06:54.036 
06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:06:54.036 Cannot find device "nvmf_init_br" 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:06:54.036 Cannot find device "nvmf_init_br2" 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:06:54.036 Cannot find device "nvmf_tgt_br" 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:06:54.036 Cannot find device "nvmf_tgt_br2" 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:06:54.036 Cannot find device "nvmf_init_br" 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:06:54.036 Cannot find device "nvmf_init_br2" 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 01:06:54.036 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:06:54.036 Cannot find device "nvmf_tgt_br" 01:06:54.295 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 01:06:54.295 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:06:54.295 Cannot find device "nvmf_tgt_br2" 01:06:54.295 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 01:06:54.295 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:06:54.295 Cannot find device "nvmf_br" 01:06:54.295 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 01:06:54.295 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:06:54.295 Cannot find device "nvmf_init_if" 01:06:54.295 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 01:06:54.295 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:06:54.295 Cannot find device "nvmf_init_if2" 01:06:54.295 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 01:06:54.295 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 01:06:54.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:06:54.295 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 01:06:54.295 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:06:54.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:06:54.295 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 01:06:54.295 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:06:54.295 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:06:54.295 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:06:54.295 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:06:54.296 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:06:54.296 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:06:54.296 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:06:54.296 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:06:54.296 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:06:54.296 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:06:54.296 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:06:54.296 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:06:54.296 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:06:54.296 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:06:54.296 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:06:54.296 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:06:54.296 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:06:54.296 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:06:54.296 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:06:54.555 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:06:54.555 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:06:54.555 06:05:48 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:06:54.555 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:06:54.555 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:06:54.555 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:06:54.555 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:06:54.555 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:06:54.555 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:06:54.555 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:06:54.555 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:06:54.555 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:06:54.555 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:06:54.555 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:06:54.555 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:06:54.555 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 01:06:54.555 01:06:54.555 --- 10.0.0.3 ping statistics --- 01:06:54.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:54.555 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 01:06:54.555 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:06:54.555 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:06:54.555 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 01:06:54.555 01:06:54.555 --- 10.0.0.4 ping statistics --- 01:06:54.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:54.555 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 01:06:54.555 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:06:54.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:06:54.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 01:06:54.555 01:06:54.555 --- 10.0.0.1 ping statistics --- 01:06:54.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:54.555 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 01:06:54.555 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:06:54.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:06:54.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 01:06:54.555 01:06:54.555 --- 10.0.0.2 ping statistics --- 01:06:54.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:54.555 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 01:06:54.555 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:06:54.555 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 01:06:54.555 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:06:54.555 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:06:54.555 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:06:54.555 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:06:54.555 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:06:54.555 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:06:54.555 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:06:54.555 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 01:06:54.555 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:06:54.555 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 01:06:54.555 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:06:54.555 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=70415 01:06:54.555 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 01:06:54.555 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 70415 01:06:54.555 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 70415 ']' 01:06:54.555 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:06:54.555 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 01:06:54.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:06:54.555 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:06:54.555 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 01:06:54.555 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:06:54.814 [2024-12-09 06:05:49.148763] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
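With the veth/bridge topology built by nvmf_veth_init above confirmed reachable by the four pings, the harness starts the SPDK target inside the nvmf_tgt_ns_spdk namespace without hugepages and pinned to core mask 0x78, then waits for its JSON-RPC socket before issuing any rpc_cmd. A rough standalone sketch of that start-and-wait step (the polling loop is only an illustrative stand-in for the harness's waitforlisten helper, not its implementation) is:

    # start the target in the test namespace: no hugepages, 1024 MB of memory, cores 0x78
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    # block until the target answers on the RPC socket it is expected to create
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done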
01:06:54.814 [2024-12-09 06:05:49.148831] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 01:06:54.814 [2024-12-09 06:05:49.310782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:06:54.814 [2024-12-09 06:05:49.373013] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:06:54.814 [2024-12-09 06:05:49.373286] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:06:54.814 [2024-12-09 06:05:49.373303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:06:54.815 [2024-12-09 06:05:49.373313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:06:54.815 [2024-12-09 06:05:49.373320] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:06:54.815 [2024-12-09 06:05:49.374246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:06:54.815 [2024-12-09 06:05:49.374537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:06:54.815 [2024-12-09 06:05:49.374329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 01:06:54.815 [2024-12-09 06:05:49.374538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 01:06:54.815 [2024-12-09 06:05:49.378591] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:06:55.750 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:06:55.750 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 01:06:55.750 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:06:55.750 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 01:06:55.750 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:06:55.750 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:06:55.750 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:06:55.750 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:55.750 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:06:55.750 [2024-12-09 06:05:50.103974] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:06:55.750 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:55.750 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:06:55.750 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:55.750 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:06:55.750 Malloc0 01:06:55.750 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:55.750 06:05:50 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:06:55.750 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:55.750 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:06:55.750 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:55.750 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:06:55.750 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:55.750 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:06:55.750 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:55.750 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:06:55.750 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:55.750 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:06:55.750 [2024-12-09 06:05:50.160135] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:06:55.751 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:55.751 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 01:06:55.751 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 01:06:55.751 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 01:06:55.751 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 01:06:55.751 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:06:55.751 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:06:55.751 { 01:06:55.751 "params": { 01:06:55.751 "name": "Nvme$subsystem", 01:06:55.751 "trtype": "$TEST_TRANSPORT", 01:06:55.751 "traddr": "$NVMF_FIRST_TARGET_IP", 01:06:55.751 "adrfam": "ipv4", 01:06:55.751 "trsvcid": "$NVMF_PORT", 01:06:55.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:06:55.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:06:55.751 "hdgst": ${hdgst:-false}, 01:06:55.751 "ddgst": ${ddgst:-false} 01:06:55.751 }, 01:06:55.751 "method": "bdev_nvme_attach_controller" 01:06:55.751 } 01:06:55.751 EOF 01:06:55.751 )") 01:06:55.751 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 01:06:55.751 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
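Provisioning the target for this bdevio run is the five-RPC sequence traced above: create the TCP transport, back it with a 64 MiB / 512-byte-block malloc bdev, export it through subsystem nqn.2016-06.io.spdk:cnode1, add the namespace, and listen on 10.0.0.3:4420. Collected in one place (rpc_cmd in the trace is the harness wrapper around rpc.py; the bare form below assumes the target's default RPC socket), the sequence is:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420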
01:06:55.751 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 01:06:55.751 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:06:55.751 "params": { 01:06:55.751 "name": "Nvme1", 01:06:55.751 "trtype": "tcp", 01:06:55.751 "traddr": "10.0.0.3", 01:06:55.751 "adrfam": "ipv4", 01:06:55.751 "trsvcid": "4420", 01:06:55.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:06:55.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:06:55.751 "hdgst": false, 01:06:55.751 "ddgst": false 01:06:55.751 }, 01:06:55.751 "method": "bdev_nvme_attach_controller" 01:06:55.751 }' 01:06:55.751 [2024-12-09 06:05:50.219143] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:06:55.751 [2024-12-09 06:05:50.219330] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid70451 ] 01:06:56.010 [2024-12-09 06:05:50.377826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:06:56.010 [2024-12-09 06:05:50.442120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:06:56.010 [2024-12-09 06:05:50.442283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:06:56.010 [2024-12-09 06:05:50.442284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:06:56.010 [2024-12-09 06:05:50.454338] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:06:56.269 I/O targets: 01:06:56.269 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 01:06:56.269 01:06:56.269 01:06:56.269 CUnit - A unit testing framework for C - Version 2.1-3 01:06:56.269 http://cunit.sourceforge.net/ 01:06:56.269 01:06:56.269 01:06:56.269 Suite: bdevio tests on: Nvme1n1 01:06:56.269 Test: blockdev write read block ...passed 01:06:56.269 Test: blockdev write zeroes read block ...passed 01:06:56.269 Test: blockdev write zeroes read no split ...passed 01:06:56.269 Test: blockdev write zeroes read split ...passed 01:06:56.269 Test: blockdev write zeroes read split partial ...passed 01:06:56.269 Test: blockdev reset ...[2024-12-09 06:05:50.689856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 01:06:56.269 [2024-12-09 06:05:50.690062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f2e90 (9): Bad file descriptor 01:06:56.269 [2024-12-09 06:05:50.707018] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
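The JSON emitted by gen_nvmf_target_json above is what bdevio consumes through --json /dev/fd/62: a single bdev_nvme_attach_controller entry that creates bdev Nvme1 against the subsystem exported a moment earlier. Expressed with the same rpc.py flags used elsewhere in this log (bdevio reads the JSON config directly rather than talking to an RPC socket, so this is an equivalent-parameters illustration only), the attach amounts to:

    rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1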
01:06:56.269 passed 01:06:56.269 Test: blockdev write read 8 blocks ...passed 01:06:56.269 Test: blockdev write read size > 128k ...passed 01:06:56.269 Test: blockdev write read invalid size ...passed 01:06:56.269 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:06:56.269 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:06:56.269 Test: blockdev write read max offset ...passed 01:06:56.269 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:06:56.269 Test: blockdev writev readv 8 blocks ...passed 01:06:56.269 Test: blockdev writev readv 30 x 1block ...passed 01:06:56.269 Test: blockdev writev readv block ...passed 01:06:56.269 Test: blockdev writev readv size > 128k ...passed 01:06:56.269 Test: blockdev writev readv size > 128k in two iovs ...passed 01:06:56.269 Test: blockdev comparev and writev ...[2024-12-09 06:05:50.717563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:06:56.269 [2024-12-09 06:05:50.717730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:06:56.269 [2024-12-09 06:05:50.717753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:06:56.269 [2024-12-09 06:05:50.717765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:56.269 [2024-12-09 06:05:50.718241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:06:56.269 [2024-12-09 06:05:50.718259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:06:56.269 [2024-12-09 06:05:50.718274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:06:56.269 [2024-12-09 06:05:50.718284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:06:56.269 [2024-12-09 06:05:50.718747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:06:56.269 [2024-12-09 06:05:50.718763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:06:56.269 [2024-12-09 06:05:50.718777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:06:56.269 [2024-12-09 06:05:50.718787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:06:56.269 [2024-12-09 06:05:50.719250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:06:56.269 [2024-12-09 06:05:50.719267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:06:56.269 [2024-12-09 06:05:50.719281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:06:56.269 [2024-12-09 06:05:50.719291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:06:56.269 passed 01:06:56.269 Test: blockdev nvme passthru rw ...passed 01:06:56.269 Test: blockdev nvme passthru vendor specific ...[2024-12-09 06:05:50.720264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:06:56.269 [2024-12-09 06:05:50.720284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:06:56.269 [2024-12-09 06:05:50.720451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:06:56.269 [2024-12-09 06:05:50.720468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:06:56.269 [2024-12-09 06:05:50.720605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:06:56.269 [2024-12-09 06:05:50.720621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:06:56.269 [2024-12-09 06:05:50.720782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOpassed 01:06:56.269 Test: blockdev nvme admin passthru ...CK OFFSET 0x0 len:0x0 01:06:56.269 [2024-12-09 06:05:50.720901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:06:56.269 passed 01:06:56.269 Test: blockdev copy ...passed 01:06:56.269 01:06:56.269 Run Summary: Type Total Ran Passed Failed Inactive 01:06:56.269 suites 1 1 n/a 0 0 01:06:56.269 tests 23 23 23 0 0 01:06:56.269 asserts 152 152 152 0 n/a 01:06:56.269 01:06:56.269 Elapsed time = 0.187 seconds 01:06:56.528 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:06:56.528 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:56.528 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:06:56.528 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:56.528 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 01:06:56.528 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 01:06:56.528 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 01:06:56.528 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 01:06:56.788 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:06:56.788 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 01:06:56.788 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 01:06:56.788 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:06:56.788 rmmod nvme_tcp 01:06:56.788 rmmod nvme_fabrics 01:06:56.788 rmmod nvme_keyring 01:06:56.788 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:06:56.788 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 01:06:56.788 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 01:06:56.788 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 70415 ']' 01:06:56.788 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 70415 01:06:56.788 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 70415 ']' 01:06:56.788 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 70415 01:06:56.788 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 01:06:56.788 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:06:56.788 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70415 01:06:56.788 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 01:06:56.788 killing process with pid 70415 01:06:56.788 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 01:06:56.788 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70415' 01:06:56.788 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 70415 01:06:56.788 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 70415 01:06:57.047 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:06:57.047 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:06:57.047 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:06:57.047 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 01:06:57.047 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 01:06:57.047 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:06:57.047 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 01:06:57.047 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:06:57.047 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:06:57.047 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:06:57.047 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:06:57.306 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:06:57.306 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:06:57.306 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:06:57.306 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:06:57.306 06:05:51 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:06:57.306 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:06:57.306 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:06:57.306 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:06:57.306 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:06:57.306 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:06:57.306 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:06:57.306 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 01:06:57.306 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:06:57.306 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:06:57.306 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:57.566 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 01:06:57.566 01:06:57.566 real 0m3.723s 01:06:57.566 user 0m10.243s 01:06:57.566 sys 0m1.588s 01:06:57.566 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 01:06:57.566 ************************************ 01:06:57.566 END TEST nvmf_bdevio_no_huge 01:06:57.566 ************************************ 01:06:57.566 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:06:57.566 06:05:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 01:06:57.566 06:05:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:06:57.566 06:05:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:06:57.566 06:05:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:06:57.566 ************************************ 01:06:57.566 START TEST nvmf_tls 01:06:57.566 ************************************ 01:06:57.566 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 01:06:57.566 * Looking for test storage... 
01:06:57.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:06:57.566 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:06:57.566 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 01:06:57.566 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:06:57.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:06:57.826 --rc genhtml_branch_coverage=1 01:06:57.826 --rc genhtml_function_coverage=1 01:06:57.826 --rc genhtml_legend=1 01:06:57.826 --rc geninfo_all_blocks=1 01:06:57.826 --rc geninfo_unexecuted_blocks=1 01:06:57.826 01:06:57.826 ' 01:06:57.826 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:06:57.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:06:57.826 --rc genhtml_branch_coverage=1 01:06:57.826 --rc genhtml_function_coverage=1 01:06:57.826 --rc genhtml_legend=1 01:06:57.827 --rc geninfo_all_blocks=1 01:06:57.827 --rc geninfo_unexecuted_blocks=1 01:06:57.827 01:06:57.827 ' 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:06:57.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:06:57.827 --rc genhtml_branch_coverage=1 01:06:57.827 --rc genhtml_function_coverage=1 01:06:57.827 --rc genhtml_legend=1 01:06:57.827 --rc geninfo_all_blocks=1 01:06:57.827 --rc geninfo_unexecuted_blocks=1 01:06:57.827 01:06:57.827 ' 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:06:57.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:06:57.827 --rc genhtml_branch_coverage=1 01:06:57.827 --rc genhtml_function_coverage=1 01:06:57.827 --rc genhtml_legend=1 01:06:57.827 --rc geninfo_all_blocks=1 01:06:57.827 --rc geninfo_unexecuted_blocks=1 01:06:57.827 01:06:57.827 ' 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:06:57.827 06:05:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:06:57.827 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:06:57.827 
06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:06:57.827 Cannot find device "nvmf_init_br" 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:06:57.827 Cannot find device "nvmf_init_br2" 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:06:57.827 Cannot find device "nvmf_tgt_br" 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 01:06:57.827 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:06:57.828 Cannot find device "nvmf_tgt_br2" 01:06:57.828 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 01:06:57.828 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:06:57.828 Cannot find device "nvmf_init_br" 01:06:57.828 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 01:06:57.828 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:06:57.828 Cannot find device "nvmf_init_br2" 01:06:57.828 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 01:06:57.828 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:06:57.828 Cannot find device "nvmf_tgt_br" 01:06:57.828 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 01:06:57.828 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:06:58.087 Cannot find device "nvmf_tgt_br2" 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:06:58.087 Cannot find device "nvmf_br" 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:06:58.087 Cannot find device "nvmf_init_if" 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:06:58.087 Cannot find device "nvmf_init_if2" 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:06:58.087 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:06:58.087 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:06:58.087 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:06:58.362 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:06:58.362 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:06:58.362 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:06:58.362 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:06:58.362 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:06:58.362 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:06:58.362 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:06:58.362 06:05:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:06:58.362 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:06:58.362 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:06:58.362 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:06:58.362 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:06:58.362 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 01:06:58.362 01:06:58.362 --- 10.0.0.3 ping statistics --- 01:06:58.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:58.362 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 01:06:58.362 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:06:58.362 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:06:58.362 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.090 ms 01:06:58.362 01:06:58.362 --- 10.0.0.4 ping statistics --- 01:06:58.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:58.362 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 01:06:58.362 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:06:58.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:06:58.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 01:06:58.362 01:06:58.362 --- 10.0.0.1 ping statistics --- 01:06:58.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:58.362 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 01:06:58.362 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:06:58.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:06:58.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 01:06:58.362 01:06:58.362 --- 10.0.0.2 ping statistics --- 01:06:58.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:58.362 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 01:06:58.362 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:06:58.362 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 01:06:58.362 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:06:58.362 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:06:58.362 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:06:58.362 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:06:58.362 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:06:58.362 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:06:58.362 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:06:58.362 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 01:06:58.362 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:06:58.362 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:06:58.362 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:06:58.362 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=70689 01:06:58.363 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 01:06:58.363 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 70689 01:06:58.363 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70689 ']' 01:06:58.363 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:06:58.363 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:06:58.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:06:58.363 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:06:58.363 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:06:58.363 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:06:58.363 [2024-12-09 06:05:52.916514] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:06:58.363 [2024-12-09 06:05:52.916575] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:06:58.674 [2024-12-09 06:05:53.068877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:58.674 [2024-12-09 06:05:53.124532] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:06:58.674 [2024-12-09 06:05:53.124570] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:06:58.674 [2024-12-09 06:05:53.124580] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:06:58.674 [2024-12-09 06:05:53.124588] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:06:58.674 [2024-12-09 06:05:53.124594] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:06:58.674 [2024-12-09 06:05:53.124947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:06:59.264 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:06:59.264 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:06:59.264 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:06:59.264 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:06:59.264 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:06:59.264 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:06:59.264 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 01:06:59.264 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 01:06:59.523 true 01:06:59.523 06:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 01:06:59.523 06:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:06:59.782 06:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 01:06:59.782 06:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 01:06:59.782 06:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 01:07:00.041 06:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:07:00.041 06:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 01:07:00.041 06:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 01:07:00.041 06:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 01:07:00.041 06:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 01:07:00.300 06:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 01:07:00.300 06:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 01:07:00.559 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 01:07:00.559 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 01:07:00.559 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:07:00.560 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 01:07:00.818 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 01:07:00.818 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 01:07:00.818 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 01:07:01.076 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 01:07:01.076 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:07:01.076 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 01:07:01.076 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 01:07:01.076 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 01:07:01.336 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:07:01.336 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 01:07:01.595 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 01:07:01.595 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 01:07:01.595 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 01:07:01.595 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 01:07:01.595 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 01:07:01.595 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:07:01.595 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 01:07:01.595 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 01:07:01.595 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 01:07:01.595 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 01:07:01.595 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 01:07:01.595 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 01:07:01.595 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 01:07:01.595 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:07:01.595 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 01:07:01.595 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 01:07:01.595 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 01:07:01.595 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 01:07:01.595 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 01:07:01.595 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.3IbFm29waC 01:07:01.595 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 01:07:01.595 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.Y1JHo4z6jh 01:07:01.595 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 01:07:01.595 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 01:07:01.595 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.3IbFm29waC 01:07:01.595 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.Y1JHo4z6jh 01:07:01.595 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 01:07:01.854 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 01:07:02.113 [2024-12-09 06:05:56.587739] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:07:02.113 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.3IbFm29waC 01:07:02.113 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.3IbFm29waC 01:07:02.113 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:07:02.372 [2024-12-09 06:05:56.846538] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:07:02.372 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:07:02.630 06:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 01:07:02.889 [2024-12-09 06:05:57.221946] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:07:02.889 [2024-12-09 06:05:57.222190] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:07:02.889 06:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:07:02.889 malloc0 01:07:02.889 06:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:07:03.148 06:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.3IbFm29waC 01:07:03.407 06:05:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 01:07:03.666 06:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.3IbFm29waC 01:07:15.893 Initializing NVMe Controllers 01:07:15.893 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:07:15.893 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:07:15.893 Initialization complete. Launching workers. 01:07:15.893 ======================================================== 01:07:15.893 Latency(us) 01:07:15.893 Device Information : IOPS MiB/s Average min max 01:07:15.893 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14693.96 57.40 4355.95 851.62 5322.06 01:07:15.893 ======================================================== 01:07:15.893 Total : 14693.96 57.40 4355.95 851.62 5322.06 01:07:15.893 01:07:15.893 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3IbFm29waC 01:07:15.893 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:07:15.893 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:07:15.893 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:07:15.893 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3IbFm29waC 01:07:15.893 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:07:15.893 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70916 01:07:15.893 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:07:15.893 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:07:15.893 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70916 /var/tmp/bdevperf.sock 01:07:15.893 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70916 ']' 01:07:15.893 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:07:15.893 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:07:15.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:07:15.893 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
01:07:15.893 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:07:15.893 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:07:15.893 [2024-12-09 06:06:08.320612] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:07:15.893 [2024-12-09 06:06:08.320672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70916 ] 01:07:15.893 [2024-12-09 06:06:08.469016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:15.893 [2024-12-09 06:06:08.509459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:07:15.893 [2024-12-09 06:06:08.550374] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:07:15.893 06:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:07:15.893 06:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:07:15.893 06:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3IbFm29waC 01:07:15.893 06:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 01:07:15.893 [2024-12-09 06:06:09.537386] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:07:15.893 TLSTESTn1 01:07:15.893 06:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 01:07:15.893 Running I/O for 10 seconds... 
01:07:17.530 5729.00 IOPS, 22.38 MiB/s [2024-12-09T06:06:13.051Z] 5759.00 IOPS, 22.50 MiB/s [2024-12-09T06:06:13.988Z] 5768.00 IOPS, 22.53 MiB/s [2024-12-09T06:06:14.926Z] 5772.75 IOPS, 22.55 MiB/s [2024-12-09T06:06:15.863Z] 5776.60 IOPS, 22.56 MiB/s [2024-12-09T06:06:16.800Z] 5773.00 IOPS, 22.55 MiB/s [2024-12-09T06:06:17.738Z] 5775.29 IOPS, 22.56 MiB/s [2024-12-09T06:06:19.116Z] 5781.62 IOPS, 22.58 MiB/s [2024-12-09T06:06:19.683Z] 5779.89 IOPS, 22.58 MiB/s [2024-12-09T06:06:19.943Z] 5778.80 IOPS, 22.57 MiB/s 01:07:25.356 Latency(us) 01:07:25.356 [2024-12-09T06:06:19.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:07:25.356 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:07:25.356 Verification LBA range: start 0x0 length 0x2000 01:07:25.356 TLSTESTn1 : 10.01 5784.09 22.59 0.00 0.00 22096.40 4553.30 17370.99 01:07:25.356 [2024-12-09T06:06:19.943Z] =================================================================================================================== 01:07:25.356 [2024-12-09T06:06:19.943Z] Total : 5784.09 22.59 0.00 0.00 22096.40 4553.30 17370.99 01:07:25.356 { 01:07:25.356 "results": [ 01:07:25.356 { 01:07:25.356 "job": "TLSTESTn1", 01:07:25.356 "core_mask": "0x4", 01:07:25.356 "workload": "verify", 01:07:25.356 "status": "finished", 01:07:25.356 "verify_range": { 01:07:25.356 "start": 0, 01:07:25.356 "length": 8192 01:07:25.356 }, 01:07:25.356 "queue_depth": 128, 01:07:25.356 "io_size": 4096, 01:07:25.356 "runtime": 10.012815, 01:07:25.356 "iops": 5784.087691623185, 01:07:25.356 "mibps": 22.594092545403065, 01:07:25.356 "io_failed": 0, 01:07:25.356 "io_timeout": 0, 01:07:25.356 "avg_latency_us": 22096.40416300443, 01:07:25.356 "min_latency_us": 4553.304417670683, 01:07:25.356 "max_latency_us": 17370.98795180723 01:07:25.356 } 01:07:25.356 ], 01:07:25.356 "core_count": 1 01:07:25.356 } 01:07:25.356 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:07:25.356 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 70916 01:07:25.356 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70916 ']' 01:07:25.356 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70916 01:07:25.356 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:07:25.356 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:07:25.356 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70916 01:07:25.356 killing process with pid 70916 01:07:25.356 Received shutdown signal, test time was about 10.000000 seconds 01:07:25.356 01:07:25.356 Latency(us) 01:07:25.356 [2024-12-09T06:06:19.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:07:25.356 [2024-12-09T06:06:19.943Z] =================================================================================================================== 01:07:25.356 [2024-12-09T06:06:19.943Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:07:25.356 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:07:25.356 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:07:25.356 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 70916' 01:07:25.356 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70916 01:07:25.356 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70916 01:07:25.356 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Y1JHo4z6jh 01:07:25.356 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 01:07:25.356 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Y1JHo4z6jh 01:07:25.356 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 01:07:25.356 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:07:25.356 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 01:07:25.356 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:07:25.356 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Y1JHo4z6jh 01:07:25.356 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:07:25.356 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:07:25.356 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:07:25.356 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Y1JHo4z6jh 01:07:25.356 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:07:25.356 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71056 01:07:25.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:07:25.616 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:07:25.616 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:07:25.616 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71056 /var/tmp/bdevperf.sock 01:07:25.616 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71056 ']' 01:07:25.616 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:07:25.616 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:07:25.616 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:07:25.616 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:07:25.616 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:07:25.616 [2024-12-09 06:06:19.990562] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:07:25.616 [2024-12-09 06:06:19.990646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71056 ] 01:07:25.616 [2024-12-09 06:06:20.133496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:25.616 [2024-12-09 06:06:20.173233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:07:25.875 [2024-12-09 06:06:20.214023] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:07:26.444 06:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:07:26.444 06:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:07:26.444 06:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Y1JHo4z6jh 01:07:26.704 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 01:07:26.704 [2024-12-09 06:06:21.220977] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:07:26.704 [2024-12-09 06:06:21.226999] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:07:26.704 [2024-12-09 06:06:21.227174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2014030 (107): Transport endpoint is not connected 01:07:26.704 [2024-12-09 06:06:21.228161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2014030 (9): Bad file descriptor 01:07:26.704 [2024-12-09 06:06:21.229159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 01:07:26.704 [2024-12-09 06:06:21.229181] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 01:07:26.704 [2024-12-09 06:06:21.229190] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 01:07:26.704 [2024-12-09 06:06:21.229203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
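This is the first negative case (target/tls.sh@147): the bdevperf keyring is loaded with /tmp/tmp.Y1JHo4z6jh, a key the target was never given, so the TLS handshake fails, the initiator sees errno 107 (Transport endpoint is not connected) on the socket, and bdev_nvme_attach_controller returns JSON-RPC code -5, as shown in the request/response dump that follows. A minimal reproduction of the failing attach, assuming the target from the earlier setup is still listening:

  # register a key the target does not know about, then try to attach over TLS
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Y1JHo4z6jh
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  # expected result in this scenario: "Input/output error" (code -5), matching the dump below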
01:07:26.704 request: 01:07:26.704 { 01:07:26.704 "name": "TLSTEST", 01:07:26.704 "trtype": "tcp", 01:07:26.704 "traddr": "10.0.0.3", 01:07:26.704 "adrfam": "ipv4", 01:07:26.704 "trsvcid": "4420", 01:07:26.704 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:07:26.704 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:07:26.704 "prchk_reftag": false, 01:07:26.704 "prchk_guard": false, 01:07:26.704 "hdgst": false, 01:07:26.704 "ddgst": false, 01:07:26.704 "psk": "key0", 01:07:26.704 "allow_unrecognized_csi": false, 01:07:26.704 "method": "bdev_nvme_attach_controller", 01:07:26.704 "req_id": 1 01:07:26.704 } 01:07:26.704 Got JSON-RPC error response 01:07:26.704 response: 01:07:26.704 { 01:07:26.704 "code": -5, 01:07:26.704 "message": "Input/output error" 01:07:26.704 } 01:07:26.704 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71056 01:07:26.704 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71056 ']' 01:07:26.704 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71056 01:07:26.704 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:07:26.704 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:07:26.705 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71056 01:07:26.964 killing process with pid 71056 01:07:26.964 Received shutdown signal, test time was about 10.000000 seconds 01:07:26.964 01:07:26.964 Latency(us) 01:07:26.964 [2024-12-09T06:06:21.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:07:26.964 [2024-12-09T06:06:21.551Z] =================================================================================================================== 01:07:26.964 [2024-12-09T06:06:21.551Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:07:26.964 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:07:26.964 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:07:26.964 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71056' 01:07:26.964 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71056 01:07:26.964 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71056 01:07:26.964 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 01:07:26.964 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 01:07:26.964 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:07:26.964 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:07:26.964 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:07:26.965 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3IbFm29waC 01:07:26.965 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 01:07:26.965 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3IbFm29waC 
01:07:26.965 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 01:07:26.965 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:07:26.965 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 01:07:26.965 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:07:26.965 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3IbFm29waC 01:07:26.965 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:07:26.965 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:07:26.965 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 01:07:26.965 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3IbFm29waC 01:07:26.965 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:07:26.965 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71079 01:07:26.965 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:07:26.965 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:07:26.965 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71079 /var/tmp/bdevperf.sock 01:07:26.965 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71079 ']' 01:07:26.965 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:07:26.965 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:07:26.965 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:07:26.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:07:26.965 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:07:26.965 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:07:26.965 [2024-12-09 06:06:21.506764] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:07:26.965 [2024-12-09 06:06:21.506844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71079 ] 01:07:27.224 [2024-12-09 06:06:21.659227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:27.224 [2024-12-09 06:06:21.701827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:07:27.224 [2024-12-09 06:06:21.742685] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:07:28.161 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:07:28.161 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:07:28.161 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3IbFm29waC 01:07:28.161 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 01:07:28.420 [2024-12-09 06:06:22.781564] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:07:28.420 [2024-12-09 06:06:22.785951] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 01:07:28.420 [2024-12-09 06:06:22.785988] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 01:07:28.420 [2024-12-09 06:06:22.786043] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:07:28.420 [2024-12-09 06:06:22.786732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57d030 (107): Transport endpoint is not connected 01:07:28.420 [2024-12-09 06:06:22.787717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57d030 (9): Bad file descriptor 01:07:28.420 [2024-12-09 06:06:22.788714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 01:07:28.420 [2024-12-09 06:06:22.788734] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 01:07:28.420 [2024-12-09 06:06:22.788744] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 01:07:28.420 [2024-12-09 06:06:22.788757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
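The second negative case (target/tls.sh@150) uses the correct key but the wrong host NQN. The target looks the PSK up by a TLS identity built from the host and subsystem NQNs, and the errors above print exactly the string it searched for: "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1". Since the key was only bound to host1, no PSK exists under that identity and the attach fails the same way. A small illustration of the identity being looked up, taken directly from the error text above:

  # identity string the target searches for, as printed by tcp.c/posix.c in the errors above
  hostnqn=nqn.2016-06.io.spdk:host2
  subnqn=nqn.2016-06.io.spdk:cnode1
  echo "NVMe0R01 ${hostnqn} ${subnqn}"   # no PSK is registered under this identity, so the handshake fails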
01:07:28.420 request: 01:07:28.420 { 01:07:28.420 "name": "TLSTEST", 01:07:28.420 "trtype": "tcp", 01:07:28.420 "traddr": "10.0.0.3", 01:07:28.420 "adrfam": "ipv4", 01:07:28.420 "trsvcid": "4420", 01:07:28.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:07:28.420 "hostnqn": "nqn.2016-06.io.spdk:host2", 01:07:28.420 "prchk_reftag": false, 01:07:28.420 "prchk_guard": false, 01:07:28.420 "hdgst": false, 01:07:28.420 "ddgst": false, 01:07:28.420 "psk": "key0", 01:07:28.420 "allow_unrecognized_csi": false, 01:07:28.420 "method": "bdev_nvme_attach_controller", 01:07:28.420 "req_id": 1 01:07:28.420 } 01:07:28.420 Got JSON-RPC error response 01:07:28.420 response: 01:07:28.420 { 01:07:28.420 "code": -5, 01:07:28.420 "message": "Input/output error" 01:07:28.420 } 01:07:28.420 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71079 01:07:28.420 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71079 ']' 01:07:28.420 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71079 01:07:28.420 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:07:28.420 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:07:28.420 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71079 01:07:28.420 killing process with pid 71079 01:07:28.420 Received shutdown signal, test time was about 10.000000 seconds 01:07:28.420 01:07:28.420 Latency(us) 01:07:28.420 [2024-12-09T06:06:23.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:07:28.420 [2024-12-09T06:06:23.007Z] =================================================================================================================== 01:07:28.420 [2024-12-09T06:06:23.007Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:07:28.420 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:07:28.420 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:07:28.420 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71079' 01:07:28.420 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71079 01:07:28.420 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71079 01:07:28.420 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 01:07:28.420 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 01:07:28.420 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:07:28.420 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:07:28.420 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:07:28.420 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3IbFm29waC 01:07:28.420 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 01:07:28.420 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3IbFm29waC 
01:07:28.420 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 01:07:28.679 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:07:28.679 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 01:07:28.679 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:07:28.679 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3IbFm29waC 01:07:28.679 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:07:28.679 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 01:07:28.679 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:07:28.679 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3IbFm29waC 01:07:28.679 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:07:28.679 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71113 01:07:28.679 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:07:28.679 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:07:28.679 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71113 /var/tmp/bdevperf.sock 01:07:28.679 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71113 ']' 01:07:28.679 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:07:28.679 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:07:28.679 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:07:28.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:07:28.679 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:07:28.679 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:07:28.679 [2024-12-09 06:06:23.062792] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:07:28.679 [2024-12-09 06:06:23.062878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71113 ] 01:07:28.679 [2024-12-09 06:06:23.194617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:28.679 [2024-12-09 06:06:23.236824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:07:28.936 [2024-12-09 06:06:23.278128] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:07:29.503 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:07:29.503 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:07:29.503 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3IbFm29waC 01:07:29.762 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 01:07:29.762 [2024-12-09 06:06:24.309075] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:07:29.762 [2024-12-09 06:06:24.313506] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 01:07:29.762 [2024-12-09 06:06:24.313543] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 01:07:29.762 [2024-12-09 06:06:24.313590] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:07:29.762 [2024-12-09 06:06:24.314282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1782030 (107): Transport endpoint is not connected 01:07:29.762 [2024-12-09 06:06:24.315266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1782030 (9): Bad file descriptor 01:07:29.762 [2024-12-09 06:06:24.316263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 01:07:29.762 [2024-12-09 06:06:24.316284] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 01:07:29.762 [2024-12-09 06:06:24.316293] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 01:07:29.762 [2024-12-09 06:06:24.316306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
01:07:29.762 request: 01:07:29.762 { 01:07:29.762 "name": "TLSTEST", 01:07:29.762 "trtype": "tcp", 01:07:29.762 "traddr": "10.0.0.3", 01:07:29.762 "adrfam": "ipv4", 01:07:29.762 "trsvcid": "4420", 01:07:29.762 "subnqn": "nqn.2016-06.io.spdk:cnode2", 01:07:29.762 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:07:29.762 "prchk_reftag": false, 01:07:29.762 "prchk_guard": false, 01:07:29.762 "hdgst": false, 01:07:29.762 "ddgst": false, 01:07:29.762 "psk": "key0", 01:07:29.762 "allow_unrecognized_csi": false, 01:07:29.762 "method": "bdev_nvme_attach_controller", 01:07:29.762 "req_id": 1 01:07:29.762 } 01:07:29.762 Got JSON-RPC error response 01:07:29.762 response: 01:07:29.762 { 01:07:29.762 "code": -5, 01:07:29.762 "message": "Input/output error" 01:07:29.762 } 01:07:29.762 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71113 01:07:29.762 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71113 ']' 01:07:29.762 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71113 01:07:29.762 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:07:29.762 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:07:29.762 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71113 01:07:30.020 killing process with pid 71113 01:07:30.020 Received shutdown signal, test time was about 10.000000 seconds 01:07:30.020 01:07:30.020 Latency(us) 01:07:30.020 [2024-12-09T06:06:24.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:07:30.020 [2024-12-09T06:06:24.607Z] =================================================================================================================== 01:07:30.020 [2024-12-09T06:06:24.607Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71113' 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71113 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71113 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 01:07:30.020 06:06:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71136 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71136 /var/tmp/bdevperf.sock 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71136 ']' 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:07:30.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:07:30.020 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:07:30.020 [2024-12-09 06:06:24.586176] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:07:30.020 [2024-12-09 06:06:24.586585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71136 ] 01:07:30.278 [2024-12-09 06:06:24.738604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:30.278 [2024-12-09 06:06:24.777850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:07:30.278 [2024-12-09 06:06:24.819482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:07:30.898 06:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:07:30.898 06:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:07:30.898 06:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 01:07:31.156 [2024-12-09 06:06:25.623217] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 01:07:31.156 [2024-12-09 06:06:25.623254] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 01:07:31.156 request: 01:07:31.156 { 01:07:31.156 "name": "key0", 01:07:31.156 "path": "", 01:07:31.156 "method": "keyring_file_add_key", 01:07:31.156 "req_id": 1 01:07:31.156 } 01:07:31.156 Got JSON-RPC error response 01:07:31.156 response: 01:07:31.156 { 01:07:31.156 "code": -1, 01:07:31.156 "message": "Operation not permitted" 01:07:31.156 } 01:07:31.156 06:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 01:07:31.415 [2024-12-09 06:06:25.811045] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:07:31.415 [2024-12-09 06:06:25.811112] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 01:07:31.415 request: 01:07:31.415 { 01:07:31.415 "name": "TLSTEST", 01:07:31.415 "trtype": "tcp", 01:07:31.415 "traddr": "10.0.0.3", 01:07:31.415 "adrfam": "ipv4", 01:07:31.415 "trsvcid": "4420", 01:07:31.415 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:07:31.415 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:07:31.415 "prchk_reftag": false, 01:07:31.415 "prchk_guard": false, 01:07:31.415 "hdgst": false, 01:07:31.415 "ddgst": false, 01:07:31.415 "psk": "key0", 01:07:31.415 "allow_unrecognized_csi": false, 01:07:31.415 "method": "bdev_nvme_attach_controller", 01:07:31.415 "req_id": 1 01:07:31.415 } 01:07:31.415 Got JSON-RPC error response 01:07:31.415 response: 01:07:31.415 { 01:07:31.415 "code": -126, 01:07:31.415 "message": "Required key not available" 01:07:31.415 } 01:07:31.415 06:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71136 01:07:31.415 06:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71136 ']' 01:07:31.415 06:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71136 01:07:31.415 06:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:07:31.415 06:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:07:31.415 06:06:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71136 01:07:31.415 killing process with pid 71136 01:07:31.415 Received shutdown signal, test time was about 10.000000 seconds 01:07:31.415 01:07:31.415 Latency(us) 01:07:31.415 [2024-12-09T06:06:26.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:07:31.415 [2024-12-09T06:06:26.002Z] =================================================================================================================== 01:07:31.415 [2024-12-09T06:06:26.002Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:07:31.415 06:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:07:31.415 06:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:07:31.415 06:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71136' 01:07:31.415 06:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71136 01:07:31.415 06:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71136 01:07:31.674 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 01:07:31.674 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 01:07:31.674 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:07:31.674 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:07:31.674 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:07:31.674 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 70689 01:07:31.674 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70689 ']' 01:07:31.674 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70689 01:07:31.674 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:07:31.674 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:07:31.674 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70689 01:07:31.674 killing process with pid 70689 01:07:31.674 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:07:31.674 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:07:31.674 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70689' 01:07:31.674 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70689 01:07:31.674 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70689 01:07:31.931 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 01:07:31.931 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 01:07:31.931 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 01:07:31.931 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 01:07:31.931 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 01:07:31.931 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 01:07:31.931 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 01:07:31.931 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 01:07:31.931 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 01:07:31.931 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.zgX4aXhYc3 01:07:31.931 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 01:07:31.931 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.zgX4aXhYc3 01:07:31.931 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 01:07:31.931 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:07:31.931 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:07:31.932 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:07:31.932 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71180 01:07:31.932 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:07:31.932 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71180 01:07:31.932 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71180 ']' 01:07:31.932 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:07:31.932 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:07:31.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:07:31.932 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:07:31.932 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:07:31.932 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:07:31.932 [2024-12-09 06:06:26.502953] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:07:31.932 [2024-12-09 06:06:26.503019] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:07:32.193 [2024-12-09 06:06:26.656958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:32.194 [2024-12-09 06:06:26.711231] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:07:32.194 [2024-12-09 06:06:26.711269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
01:07:32.194 [2024-12-09 06:06:26.711278] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:07:32.194 [2024-12-09 06:06:26.711287] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:07:32.194 [2024-12-09 06:06:26.711293] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:07:32.194 [2024-12-09 06:06:26.711645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:07:32.452 [2024-12-09 06:06:26.786790] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:07:33.019 06:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:07:33.020 06:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:07:33.020 06:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:07:33.020 06:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:07:33.020 06:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:07:33.020 06:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:07:33.020 06:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.zgX4aXhYc3 01:07:33.020 06:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zgX4aXhYc3 01:07:33.020 06:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:07:33.279 [2024-12-09 06:06:27.617584] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:07:33.279 06:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:07:33.279 06:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 01:07:33.538 [2024-12-09 06:06:28.004995] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:07:33.538 [2024-12-09 06:06:28.005227] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:07:33.538 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:07:33.798 malloc0 01:07:33.798 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:07:34.057 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zgX4aXhYc3 01:07:34.057 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 01:07:34.316 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zgX4aXhYc3 01:07:34.316 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
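The key used by this round of tests was staged a few lines earlier: format_interchange_psk wraps the configured hex secret into the NVMeTLSkey-1:02:...: interchange format, the result is written to a mktemp file, and the file is chmod'ed to 0600 before keyring_file_add_key is called (a later step in this run deliberately relaxes it to 0666 to show that the keyring then rejects it). Collected in one place, the file-based key setup as it appears in this job, with the key material and paths taken from the log:

  # interchange-format PSK produced by format_interchange_psk earlier in the log
  key_long='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
  key_long_path=$(mktemp)                 # /tmp/tmp.zgX4aXhYc3 in this run
  echo -n "$key_long" > "$key_long_path"
  chmod 0600 "$key_long_path"             # keyring_file refuses key files with looser permissions

  # register the same file on both ends: the nvmf target (default spdk.sock) and bdevperf
  scripts/rpc.py keyring_file_add_key key0 "$key_long_path"
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_long_path"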
01:07:34.316 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:07:34.316 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:07:34.316 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zgX4aXhYc3 01:07:34.316 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:07:34.316 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:07:34.316 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71230 01:07:34.316 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:07:34.316 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71230 /var/tmp/bdevperf.sock 01:07:34.316 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71230 ']' 01:07:34.316 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:07:34.316 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:07:34.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:07:34.316 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:07:34.316 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:07:34.316 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:07:34.316 [2024-12-09 06:06:28.876787] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:07:34.316 [2024-12-09 06:06:28.876852] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71230 ] 01:07:34.575 [2024-12-09 06:06:29.027153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:34.575 [2024-12-09 06:06:29.067934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:07:34.575 [2024-12-09 06:06:29.109389] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:07:35.543 06:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:07:35.543 06:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:07:35.543 06:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zgX4aXhYc3 01:07:35.543 06:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 01:07:35.543 [2024-12-09 06:06:30.092556] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:07:35.819 TLSTESTn1 01:07:35.819 06:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 01:07:35.819 Running I/O for 10 seconds... 01:07:37.707 5682.00 IOPS, 22.20 MiB/s [2024-12-09T06:06:33.684Z] 5710.50 IOPS, 22.31 MiB/s [2024-12-09T06:06:34.617Z] 5723.00 IOPS, 22.36 MiB/s [2024-12-09T06:06:35.552Z] 5730.75 IOPS, 22.39 MiB/s [2024-12-09T06:06:36.487Z] 5731.80 IOPS, 22.39 MiB/s [2024-12-09T06:06:37.425Z] 5736.17 IOPS, 22.41 MiB/s [2024-12-09T06:06:38.363Z] 5742.43 IOPS, 22.43 MiB/s [2024-12-09T06:06:39.301Z] 5745.12 IOPS, 22.44 MiB/s [2024-12-09T06:06:40.683Z] 5744.00 IOPS, 22.44 MiB/s [2024-12-09T06:06:40.683Z] 5743.30 IOPS, 22.43 MiB/s 01:07:46.096 Latency(us) 01:07:46.096 [2024-12-09T06:06:40.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:07:46.096 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:07:46.096 Verification LBA range: start 0x0 length 0x2000 01:07:46.096 TLSTESTn1 : 10.01 5749.35 22.46 0.00 0.00 22231.63 3737.39 17476.27 01:07:46.096 [2024-12-09T06:06:40.683Z] =================================================================================================================== 01:07:46.096 [2024-12-09T06:06:40.683Z] Total : 5749.35 22.46 0.00 0.00 22231.63 3737.39 17476.27 01:07:46.096 { 01:07:46.096 "results": [ 01:07:46.096 { 01:07:46.096 "job": "TLSTESTn1", 01:07:46.096 "core_mask": "0x4", 01:07:46.096 "workload": "verify", 01:07:46.096 "status": "finished", 01:07:46.096 "verify_range": { 01:07:46.096 "start": 0, 01:07:46.096 "length": 8192 01:07:46.096 }, 01:07:46.096 "queue_depth": 128, 01:07:46.096 "io_size": 4096, 01:07:46.096 "runtime": 10.011385, 01:07:46.096 "iops": 5749.35436006107, 01:07:46.096 "mibps": 22.458415468988555, 01:07:46.096 "io_failed": 0, 01:07:46.096 "io_timeout": 0, 01:07:46.096 "avg_latency_us": 22231.626140734516, 01:07:46.096 "min_latency_us": 3737.39437751004, 01:07:46.096 
"max_latency_us": 17476.266666666666 01:07:46.096 } 01:07:46.096 ], 01:07:46.096 "core_count": 1 01:07:46.096 } 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71230 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71230 ']' 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71230 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71230 01:07:46.096 killing process with pid 71230 01:07:46.096 Received shutdown signal, test time was about 10.000000 seconds 01:07:46.096 01:07:46.096 Latency(us) 01:07:46.096 [2024-12-09T06:06:40.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:07:46.096 [2024-12-09T06:06:40.683Z] =================================================================================================================== 01:07:46.096 [2024-12-09T06:06:40.683Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71230' 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71230 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71230 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.zgX4aXhYc3 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zgX4aXhYc3 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zgX4aXhYc3 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zgX4aXhYc3 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zgX4aXhYc3 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71371 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:07:46.096 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71371 /var/tmp/bdevperf.sock 01:07:46.097 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71371 ']' 01:07:46.097 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:07:46.097 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:07:46.097 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:07:46.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:07:46.097 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:07:46.097 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:07:46.097 [2024-12-09 06:06:40.582362] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
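A quick cross-check of the TLSTESTn1 result block above: the reported MiB/s and the configured queue depth both follow from the printed IOPS, I/O size and average latency. A minimal Python sketch using only numbers copied from that JSON (the Little's-law comparison is added commentary, not part of the test output):

    # Values copied verbatim from the TLSTESTn1 result JSON above.
    iops       = 5749.35436006107      # "iops"
    io_size    = 4096                  # "io_size" in bytes
    qd         = 128                   # "queue_depth"
    avg_lat_us = 22231.626140734516    # "avg_latency_us"

    # Throughput: IOPS * block size, expressed in MiB/s.
    print(iops * io_size / (1024 * 1024))   # ~22.46, matches the reported "mibps"

    # Little's law: in-flight I/O = IOPS * average latency (s); should sit near the -q 128 queue depth.
    print(iops * avg_lat_us / 1e6)          # ~127.8
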
01:07:46.097 [2024-12-09 06:06:40.582553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71371 ] 01:07:46.356 [2024-12-09 06:06:40.731319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:46.356 [2024-12-09 06:06:40.771285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:07:46.356 [2024-12-09 06:06:40.812070] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:07:46.943 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:07:46.943 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:07:46.943 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zgX4aXhYc3 01:07:47.202 [2024-12-09 06:06:41.603122] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zgX4aXhYc3': 0100666 01:07:47.202 [2024-12-09 06:06:41.603156] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 01:07:47.202 request: 01:07:47.202 { 01:07:47.202 "name": "key0", 01:07:47.202 "path": "/tmp/tmp.zgX4aXhYc3", 01:07:47.202 "method": "keyring_file_add_key", 01:07:47.202 "req_id": 1 01:07:47.202 } 01:07:47.202 Got JSON-RPC error response 01:07:47.202 response: 01:07:47.202 { 01:07:47.202 "code": -1, 01:07:47.202 "message": "Operation not permitted" 01:07:47.202 } 01:07:47.202 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 01:07:47.202 [2024-12-09 06:06:41.778959] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:07:47.202 [2024-12-09 06:06:41.779187] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 01:07:47.202 request: 01:07:47.202 { 01:07:47.202 "name": "TLSTEST", 01:07:47.202 "trtype": "tcp", 01:07:47.202 "traddr": "10.0.0.3", 01:07:47.202 "adrfam": "ipv4", 01:07:47.202 "trsvcid": "4420", 01:07:47.202 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:07:47.202 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:07:47.202 "prchk_reftag": false, 01:07:47.202 "prchk_guard": false, 01:07:47.202 "hdgst": false, 01:07:47.202 "ddgst": false, 01:07:47.202 "psk": "key0", 01:07:47.202 "allow_unrecognized_csi": false, 01:07:47.202 "method": "bdev_nvme_attach_controller", 01:07:47.202 "req_id": 1 01:07:47.202 } 01:07:47.202 Got JSON-RPC error response 01:07:47.202 response: 01:07:47.202 { 01:07:47.202 "code": -126, 01:07:47.202 "message": "Required key not available" 01:07:47.202 } 01:07:47.462 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71371 01:07:47.462 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71371 ']' 01:07:47.462 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71371 01:07:47.462 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:07:47.462 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:07:47.462 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71371 01:07:47.462 killing process with pid 71371 01:07:47.462 Received shutdown signal, test time was about 10.000000 seconds 01:07:47.462 01:07:47.462 Latency(us) 01:07:47.462 [2024-12-09T06:06:42.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:07:47.462 [2024-12-09T06:06:42.049Z] =================================================================================================================== 01:07:47.462 [2024-12-09T06:06:42.049Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:07:47.462 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:07:47.462 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:07:47.462 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71371' 01:07:47.462 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71371 01:07:47.462 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71371 01:07:47.462 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 01:07:47.462 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 01:07:47.462 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:07:47.462 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:07:47.462 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:07:47.462 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71180 01:07:47.462 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71180 ']' 01:07:47.462 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71180 01:07:47.462 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:07:47.462 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:07:47.462 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71180 01:07:47.462 killing process with pid 71180 01:07:47.462 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:07:47.462 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:07:47.462 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71180' 01:07:47.462 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71180 01:07:47.462 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71180 01:07:48.029 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 01:07:48.029 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:07:48.029 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:07:48.029 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 01:07:48.029 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71399 01:07:48.029 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:07:48.029 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71399 01:07:48.029 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71399 ']' 01:07:48.029 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:07:48.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:07:48.029 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:07:48.029 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:07:48.029 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:07:48.029 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:07:48.029 [2024-12-09 06:06:42.405464] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:07:48.029 [2024-12-09 06:06:42.405525] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:07:48.029 [2024-12-09 06:06:42.557903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:48.029 [2024-12-09 06:06:42.613415] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:07:48.029 [2024-12-09 06:06:42.613462] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:07:48.029 [2024-12-09 06:06:42.613472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:07:48.029 [2024-12-09 06:06:42.613480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:07:48.029 [2024-12-09 06:06:42.613488] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
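The keyring_file_add_key failures in this log (Invalid permissions for key file '/tmp/tmp.zgX4aXhYc3': 0100666, seen above and again below) are the intended outcome of the chmod 0666 a few lines earlier: the file-based keyring refuses PSK files that are readable or writable by group/other, which is why the later chmod 0600 lets the same RPC succeed. An illustrative Python equivalent of that pre-flight check (check_psk_file is a hypothetical helper, not SPDK's actual keyring_file_check_path):

    import os
    import stat

    def check_psk_file(path: str) -> None:
        """Reject a PSK file whose mode grants any group/other access."""
        mode = os.stat(path).st_mode
        if stat.S_IMODE(mode) & 0o077:
            raise PermissionError(f"Invalid permissions for key file '{path}': {oct(mode)}")

    check_psk_file("/tmp/tmp.zgX4aXhYc3")   # raises after chmod 0666, passes after chmod 0600
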
01:07:48.029 [2024-12-09 06:06:42.613856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:07:48.288 [2024-12-09 06:06:42.690322] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:07:48.854 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:07:48.854 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:07:48.854 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:07:48.854 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:07:48.854 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:07:48.854 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:07:48.854 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.zgX4aXhYc3 01:07:48.854 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 01:07:48.854 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.zgX4aXhYc3 01:07:48.854 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 01:07:48.854 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:07:48.854 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 01:07:48.854 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:07:48.854 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.zgX4aXhYc3 01:07:48.854 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zgX4aXhYc3 01:07:48.854 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:07:49.113 [2024-12-09 06:06:43.503284] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:07:49.113 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:07:49.371 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 01:07:49.371 [2024-12-09 06:06:43.874752] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:07:49.371 [2024-12-09 06:06:43.874977] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:07:49.371 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:07:49.629 malloc0 01:07:49.629 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:07:49.887 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zgX4aXhYc3 01:07:49.887 
[2024-12-09 06:06:44.451807] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zgX4aXhYc3': 0100666 01:07:49.887 [2024-12-09 06:06:44.451846] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 01:07:49.887 request: 01:07:49.887 { 01:07:49.887 "name": "key0", 01:07:49.887 "path": "/tmp/tmp.zgX4aXhYc3", 01:07:49.887 "method": "keyring_file_add_key", 01:07:49.887 "req_id": 1 01:07:49.887 } 01:07:49.887 Got JSON-RPC error response 01:07:49.887 response: 01:07:49.887 { 01:07:49.887 "code": -1, 01:07:49.887 "message": "Operation not permitted" 01:07:49.887 } 01:07:50.145 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 01:07:50.145 [2024-12-09 06:06:44.663520] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 01:07:50.145 [2024-12-09 06:06:44.663764] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 01:07:50.145 request: 01:07:50.145 { 01:07:50.145 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:07:50.145 "host": "nqn.2016-06.io.spdk:host1", 01:07:50.145 "psk": "key0", 01:07:50.145 "method": "nvmf_subsystem_add_host", 01:07:50.145 "req_id": 1 01:07:50.145 } 01:07:50.145 Got JSON-RPC error response 01:07:50.146 response: 01:07:50.146 { 01:07:50.146 "code": -32603, 01:07:50.146 "message": "Internal error" 01:07:50.146 } 01:07:50.146 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 01:07:50.146 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:07:50.146 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:07:50.146 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:07:50.146 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 71399 01:07:50.146 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71399 ']' 01:07:50.146 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71399 01:07:50.146 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:07:50.146 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:07:50.146 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71399 01:07:50.403 killing process with pid 71399 01:07:50.403 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:07:50.403 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:07:50.403 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71399' 01:07:50.403 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71399 01:07:50.403 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71399 01:07:50.666 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.zgX4aXhYc3 01:07:50.666 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 01:07:50.666 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:07:50.666 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:07:50.666 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:07:50.666 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71468 01:07:50.666 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:07:50.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:07:50.666 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71468 01:07:50.666 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71468 ']' 01:07:50.666 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:07:50.666 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:07:50.666 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:07:50.666 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:07:50.666 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:07:50.666 [2024-12-09 06:06:45.097834] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:07:50.666 [2024-12-09 06:06:45.098125] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:07:50.666 [2024-12-09 06:06:45.247868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:50.924 [2024-12-09 06:06:45.303623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:07:50.924 [2024-12-09 06:06:45.303663] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:07:50.924 [2024-12-09 06:06:45.303673] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:07:50.924 [2024-12-09 06:06:45.303680] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:07:50.924 [2024-12-09 06:06:45.303687] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
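Throughout this run the nvmf target is launched with -m 0x2 and reports 'Reactor started on core 1', while each bdevperf instance uses -m 0x4, reports 'Reactor started on core 2' and shows up as reactor_2 in the killprocess checks. The -m argument is simply a CPU core bitmask; a tiny illustrative decoder (cores is a hypothetical helper):

    def cores(mask: int) -> list[int]:
        """Return the CPU core indices selected by an SPDK/DPDK -m core mask."""
        return [bit for bit in range(mask.bit_length()) if (mask >> bit) & 1]

    print(cores(0x2))   # [1] -> nvmf_tgt reactor on core 1
    print(cores(0x4))   # [2] -> bdevperf reactor on core 2
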
01:07:50.924 [2024-12-09 06:06:45.304038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:07:50.924 [2024-12-09 06:06:45.379902] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:07:51.492 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:07:51.492 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:07:51.492 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:07:51.492 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:07:51.492 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:07:51.492 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:07:51.492 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.zgX4aXhYc3 01:07:51.492 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zgX4aXhYc3 01:07:51.492 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:07:51.752 [2024-12-09 06:06:46.185850] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:07:51.752 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:07:52.012 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 01:07:52.012 [2024-12-09 06:06:46.585309] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:07:52.012 [2024-12-09 06:06:46.585555] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:07:52.272 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:07:52.272 malloc0 01:07:52.272 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:07:52.531 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zgX4aXhYc3 01:07:52.791 06:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 01:07:52.791 06:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:07:52.791 06:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=71521 01:07:52.791 06:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:07:52.791 06:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 71521 /var/tmp/bdevperf.sock 01:07:52.791 06:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71521 ']' 
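The keyring_file_add_key and bdev_nvme_attach_controller calls issued against this bdevperf instance just below are driven through scripts/rpc.py -s /var/tmp/bdevperf.sock; on the wire that is ordinary JSON-RPC 2.0 over the app's Unix domain socket. A rough standard-library sketch of the same two requests (the rpc helper is hypothetical, error handling and response validation are omitted, and the parameter values are the ones the test uses):

    import json
    import socket

    def rpc(sock_path: str, method: str, params: dict, req_id: int = 1) -> dict:
        """Send one JSON-RPC 2.0 request to an SPDK app's Unix socket and return the parsed reply."""
        req = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            buf = b""
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    raise ConnectionError("socket closed before a complete reply arrived")
                buf += chunk
                try:
                    return json.loads(buf)
                except json.JSONDecodeError:
                    continue   # reply not fully received yet

    rpc("/var/tmp/bdevperf.sock", "keyring_file_add_key",
        {"name": "key0", "path": "/tmp/tmp.zgX4aXhYc3"})
    rpc("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller",
        {"name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.3", "adrfam": "ipv4",
         "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
         "hostnqn": "nqn.2016-06.io.spdk:host1", "psk": "key0"})
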
01:07:52.791 06:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:07:52.791 06:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:07:52.791 06:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:07:52.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:07:52.791 06:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:07:52.791 06:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:07:53.050 [2024-12-09 06:06:47.414648] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:07:53.050 [2024-12-09 06:06:47.414730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71521 ] 01:07:53.051 [2024-12-09 06:06:47.566580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:53.051 [2024-12-09 06:06:47.605507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:07:53.310 [2024-12-09 06:06:47.647092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:07:53.879 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:07:53.879 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:07:53.879 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zgX4aXhYc3 01:07:53.879 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 01:07:54.137 [2024-12-09 06:06:48.606870] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:07:54.137 TLSTESTn1 01:07:54.137 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 01:07:54.397 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 01:07:54.397 "subsystems": [ 01:07:54.397 { 01:07:54.397 "subsystem": "keyring", 01:07:54.397 "config": [ 01:07:54.397 { 01:07:54.397 "method": "keyring_file_add_key", 01:07:54.397 "params": { 01:07:54.397 "name": "key0", 01:07:54.397 "path": "/tmp/tmp.zgX4aXhYc3" 01:07:54.397 } 01:07:54.397 } 01:07:54.397 ] 01:07:54.397 }, 01:07:54.397 { 01:07:54.397 "subsystem": "iobuf", 01:07:54.397 "config": [ 01:07:54.397 { 01:07:54.397 "method": "iobuf_set_options", 01:07:54.397 "params": { 01:07:54.397 "small_pool_count": 8192, 01:07:54.397 "large_pool_count": 1024, 01:07:54.397 "small_bufsize": 8192, 01:07:54.397 "large_bufsize": 135168, 01:07:54.397 "enable_numa": false 01:07:54.397 } 01:07:54.397 } 01:07:54.397 ] 01:07:54.397 }, 01:07:54.397 { 01:07:54.397 "subsystem": "sock", 01:07:54.397 "config": [ 01:07:54.397 { 01:07:54.397 "method": "sock_set_default_impl", 01:07:54.397 "params": { 
01:07:54.397 "impl_name": "uring" 01:07:54.397 } 01:07:54.397 }, 01:07:54.397 { 01:07:54.397 "method": "sock_impl_set_options", 01:07:54.397 "params": { 01:07:54.397 "impl_name": "ssl", 01:07:54.397 "recv_buf_size": 4096, 01:07:54.397 "send_buf_size": 4096, 01:07:54.397 "enable_recv_pipe": true, 01:07:54.397 "enable_quickack": false, 01:07:54.397 "enable_placement_id": 0, 01:07:54.397 "enable_zerocopy_send_server": true, 01:07:54.397 "enable_zerocopy_send_client": false, 01:07:54.397 "zerocopy_threshold": 0, 01:07:54.397 "tls_version": 0, 01:07:54.397 "enable_ktls": false 01:07:54.397 } 01:07:54.397 }, 01:07:54.397 { 01:07:54.397 "method": "sock_impl_set_options", 01:07:54.397 "params": { 01:07:54.397 "impl_name": "posix", 01:07:54.397 "recv_buf_size": 2097152, 01:07:54.397 "send_buf_size": 2097152, 01:07:54.397 "enable_recv_pipe": true, 01:07:54.397 "enable_quickack": false, 01:07:54.397 "enable_placement_id": 0, 01:07:54.397 "enable_zerocopy_send_server": true, 01:07:54.397 "enable_zerocopy_send_client": false, 01:07:54.397 "zerocopy_threshold": 0, 01:07:54.397 "tls_version": 0, 01:07:54.397 "enable_ktls": false 01:07:54.397 } 01:07:54.397 }, 01:07:54.397 { 01:07:54.397 "method": "sock_impl_set_options", 01:07:54.397 "params": { 01:07:54.397 "impl_name": "uring", 01:07:54.397 "recv_buf_size": 2097152, 01:07:54.397 "send_buf_size": 2097152, 01:07:54.397 "enable_recv_pipe": true, 01:07:54.397 "enable_quickack": false, 01:07:54.397 "enable_placement_id": 0, 01:07:54.397 "enable_zerocopy_send_server": false, 01:07:54.397 "enable_zerocopy_send_client": false, 01:07:54.397 "zerocopy_threshold": 0, 01:07:54.397 "tls_version": 0, 01:07:54.397 "enable_ktls": false 01:07:54.397 } 01:07:54.397 } 01:07:54.397 ] 01:07:54.397 }, 01:07:54.397 { 01:07:54.397 "subsystem": "vmd", 01:07:54.397 "config": [] 01:07:54.397 }, 01:07:54.397 { 01:07:54.397 "subsystem": "accel", 01:07:54.397 "config": [ 01:07:54.397 { 01:07:54.397 "method": "accel_set_options", 01:07:54.397 "params": { 01:07:54.397 "small_cache_size": 128, 01:07:54.397 "large_cache_size": 16, 01:07:54.397 "task_count": 2048, 01:07:54.397 "sequence_count": 2048, 01:07:54.397 "buf_count": 2048 01:07:54.397 } 01:07:54.397 } 01:07:54.397 ] 01:07:54.397 }, 01:07:54.397 { 01:07:54.397 "subsystem": "bdev", 01:07:54.397 "config": [ 01:07:54.397 { 01:07:54.397 "method": "bdev_set_options", 01:07:54.397 "params": { 01:07:54.397 "bdev_io_pool_size": 65535, 01:07:54.397 "bdev_io_cache_size": 256, 01:07:54.397 "bdev_auto_examine": true, 01:07:54.397 "iobuf_small_cache_size": 128, 01:07:54.397 "iobuf_large_cache_size": 16 01:07:54.397 } 01:07:54.397 }, 01:07:54.397 { 01:07:54.397 "method": "bdev_raid_set_options", 01:07:54.397 "params": { 01:07:54.397 "process_window_size_kb": 1024, 01:07:54.397 "process_max_bandwidth_mb_sec": 0 01:07:54.397 } 01:07:54.397 }, 01:07:54.397 { 01:07:54.397 "method": "bdev_iscsi_set_options", 01:07:54.397 "params": { 01:07:54.397 "timeout_sec": 30 01:07:54.397 } 01:07:54.397 }, 01:07:54.397 { 01:07:54.397 "method": "bdev_nvme_set_options", 01:07:54.397 "params": { 01:07:54.397 "action_on_timeout": "none", 01:07:54.397 "timeout_us": 0, 01:07:54.397 "timeout_admin_us": 0, 01:07:54.397 "keep_alive_timeout_ms": 10000, 01:07:54.397 "arbitration_burst": 0, 01:07:54.397 "low_priority_weight": 0, 01:07:54.397 "medium_priority_weight": 0, 01:07:54.397 "high_priority_weight": 0, 01:07:54.397 "nvme_adminq_poll_period_us": 10000, 01:07:54.397 "nvme_ioq_poll_period_us": 0, 01:07:54.397 "io_queue_requests": 0, 01:07:54.397 "delay_cmd_submit": 
true, 01:07:54.397 "transport_retry_count": 4, 01:07:54.397 "bdev_retry_count": 3, 01:07:54.397 "transport_ack_timeout": 0, 01:07:54.397 "ctrlr_loss_timeout_sec": 0, 01:07:54.397 "reconnect_delay_sec": 0, 01:07:54.397 "fast_io_fail_timeout_sec": 0, 01:07:54.397 "disable_auto_failback": false, 01:07:54.397 "generate_uuids": false, 01:07:54.397 "transport_tos": 0, 01:07:54.397 "nvme_error_stat": false, 01:07:54.397 "rdma_srq_size": 0, 01:07:54.397 "io_path_stat": false, 01:07:54.397 "allow_accel_sequence": false, 01:07:54.397 "rdma_max_cq_size": 0, 01:07:54.397 "rdma_cm_event_timeout_ms": 0, 01:07:54.397 "dhchap_digests": [ 01:07:54.397 "sha256", 01:07:54.397 "sha384", 01:07:54.397 "sha512" 01:07:54.397 ], 01:07:54.397 "dhchap_dhgroups": [ 01:07:54.397 "null", 01:07:54.397 "ffdhe2048", 01:07:54.397 "ffdhe3072", 01:07:54.397 "ffdhe4096", 01:07:54.397 "ffdhe6144", 01:07:54.397 "ffdhe8192" 01:07:54.397 ] 01:07:54.397 } 01:07:54.397 }, 01:07:54.397 { 01:07:54.397 "method": "bdev_nvme_set_hotplug", 01:07:54.397 "params": { 01:07:54.397 "period_us": 100000, 01:07:54.397 "enable": false 01:07:54.397 } 01:07:54.397 }, 01:07:54.397 { 01:07:54.397 "method": "bdev_malloc_create", 01:07:54.397 "params": { 01:07:54.398 "name": "malloc0", 01:07:54.398 "num_blocks": 8192, 01:07:54.398 "block_size": 4096, 01:07:54.398 "physical_block_size": 4096, 01:07:54.398 "uuid": "02dcac9a-9fb9-4a6c-8a79-002212decf11", 01:07:54.398 "optimal_io_boundary": 0, 01:07:54.398 "md_size": 0, 01:07:54.398 "dif_type": 0, 01:07:54.398 "dif_is_head_of_md": false, 01:07:54.398 "dif_pi_format": 0 01:07:54.398 } 01:07:54.398 }, 01:07:54.398 { 01:07:54.398 "method": "bdev_wait_for_examine" 01:07:54.398 } 01:07:54.398 ] 01:07:54.398 }, 01:07:54.398 { 01:07:54.398 "subsystem": "nbd", 01:07:54.398 "config": [] 01:07:54.398 }, 01:07:54.398 { 01:07:54.398 "subsystem": "scheduler", 01:07:54.398 "config": [ 01:07:54.398 { 01:07:54.398 "method": "framework_set_scheduler", 01:07:54.398 "params": { 01:07:54.398 "name": "static" 01:07:54.398 } 01:07:54.398 } 01:07:54.398 ] 01:07:54.398 }, 01:07:54.398 { 01:07:54.398 "subsystem": "nvmf", 01:07:54.398 "config": [ 01:07:54.398 { 01:07:54.398 "method": "nvmf_set_config", 01:07:54.398 "params": { 01:07:54.398 "discovery_filter": "match_any", 01:07:54.398 "admin_cmd_passthru": { 01:07:54.398 "identify_ctrlr": false 01:07:54.398 }, 01:07:54.398 "dhchap_digests": [ 01:07:54.398 "sha256", 01:07:54.398 "sha384", 01:07:54.398 "sha512" 01:07:54.398 ], 01:07:54.398 "dhchap_dhgroups": [ 01:07:54.398 "null", 01:07:54.398 "ffdhe2048", 01:07:54.398 "ffdhe3072", 01:07:54.398 "ffdhe4096", 01:07:54.398 "ffdhe6144", 01:07:54.398 "ffdhe8192" 01:07:54.398 ] 01:07:54.398 } 01:07:54.398 }, 01:07:54.398 { 01:07:54.398 "method": "nvmf_set_max_subsystems", 01:07:54.398 "params": { 01:07:54.398 "max_subsystems": 1024 01:07:54.398 } 01:07:54.398 }, 01:07:54.398 { 01:07:54.398 "method": "nvmf_set_crdt", 01:07:54.398 "params": { 01:07:54.398 "crdt1": 0, 01:07:54.398 "crdt2": 0, 01:07:54.398 "crdt3": 0 01:07:54.398 } 01:07:54.398 }, 01:07:54.398 { 01:07:54.398 "method": "nvmf_create_transport", 01:07:54.398 "params": { 01:07:54.398 "trtype": "TCP", 01:07:54.398 "max_queue_depth": 128, 01:07:54.398 "max_io_qpairs_per_ctrlr": 127, 01:07:54.398 "in_capsule_data_size": 4096, 01:07:54.398 "max_io_size": 131072, 01:07:54.398 "io_unit_size": 131072, 01:07:54.398 "max_aq_depth": 128, 01:07:54.398 "num_shared_buffers": 511, 01:07:54.398 "buf_cache_size": 4294967295, 01:07:54.398 "dif_insert_or_strip": false, 01:07:54.398 "zcopy": false, 
01:07:54.398 "c2h_success": false, 01:07:54.398 "sock_priority": 0, 01:07:54.398 "abort_timeout_sec": 1, 01:07:54.398 "ack_timeout": 0, 01:07:54.398 "data_wr_pool_size": 0 01:07:54.398 } 01:07:54.398 }, 01:07:54.398 { 01:07:54.398 "method": "nvmf_create_subsystem", 01:07:54.398 "params": { 01:07:54.398 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:07:54.398 "allow_any_host": false, 01:07:54.398 "serial_number": "SPDK00000000000001", 01:07:54.398 "model_number": "SPDK bdev Controller", 01:07:54.398 "max_namespaces": 10, 01:07:54.398 "min_cntlid": 1, 01:07:54.398 "max_cntlid": 65519, 01:07:54.398 "ana_reporting": false 01:07:54.398 } 01:07:54.398 }, 01:07:54.398 { 01:07:54.398 "method": "nvmf_subsystem_add_host", 01:07:54.398 "params": { 01:07:54.398 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:07:54.398 "host": "nqn.2016-06.io.spdk:host1", 01:07:54.398 "psk": "key0" 01:07:54.398 } 01:07:54.398 }, 01:07:54.398 { 01:07:54.398 "method": "nvmf_subsystem_add_ns", 01:07:54.398 "params": { 01:07:54.398 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:07:54.398 "namespace": { 01:07:54.398 "nsid": 1, 01:07:54.398 "bdev_name": "malloc0", 01:07:54.398 "nguid": "02DCAC9A9FB94A6C8A79002212DECF11", 01:07:54.398 "uuid": "02dcac9a-9fb9-4a6c-8a79-002212decf11", 01:07:54.398 "no_auto_visible": false 01:07:54.398 } 01:07:54.398 } 01:07:54.398 }, 01:07:54.398 { 01:07:54.398 "method": "nvmf_subsystem_add_listener", 01:07:54.398 "params": { 01:07:54.398 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:07:54.398 "listen_address": { 01:07:54.398 "trtype": "TCP", 01:07:54.398 "adrfam": "IPv4", 01:07:54.398 "traddr": "10.0.0.3", 01:07:54.398 "trsvcid": "4420" 01:07:54.398 }, 01:07:54.398 "secure_channel": true 01:07:54.398 } 01:07:54.398 } 01:07:54.398 ] 01:07:54.398 } 01:07:54.398 ] 01:07:54.398 }' 01:07:54.657 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 01:07:54.917 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 01:07:54.917 "subsystems": [ 01:07:54.917 { 01:07:54.917 "subsystem": "keyring", 01:07:54.917 "config": [ 01:07:54.917 { 01:07:54.917 "method": "keyring_file_add_key", 01:07:54.917 "params": { 01:07:54.917 "name": "key0", 01:07:54.917 "path": "/tmp/tmp.zgX4aXhYc3" 01:07:54.917 } 01:07:54.917 } 01:07:54.917 ] 01:07:54.917 }, 01:07:54.917 { 01:07:54.917 "subsystem": "iobuf", 01:07:54.917 "config": [ 01:07:54.917 { 01:07:54.917 "method": "iobuf_set_options", 01:07:54.917 "params": { 01:07:54.917 "small_pool_count": 8192, 01:07:54.917 "large_pool_count": 1024, 01:07:54.917 "small_bufsize": 8192, 01:07:54.917 "large_bufsize": 135168, 01:07:54.917 "enable_numa": false 01:07:54.917 } 01:07:54.917 } 01:07:54.917 ] 01:07:54.917 }, 01:07:54.917 { 01:07:54.917 "subsystem": "sock", 01:07:54.917 "config": [ 01:07:54.917 { 01:07:54.917 "method": "sock_set_default_impl", 01:07:54.917 "params": { 01:07:54.917 "impl_name": "uring" 01:07:54.917 } 01:07:54.917 }, 01:07:54.917 { 01:07:54.917 "method": "sock_impl_set_options", 01:07:54.917 "params": { 01:07:54.917 "impl_name": "ssl", 01:07:54.917 "recv_buf_size": 4096, 01:07:54.917 "send_buf_size": 4096, 01:07:54.917 "enable_recv_pipe": true, 01:07:54.917 "enable_quickack": false, 01:07:54.917 "enable_placement_id": 0, 01:07:54.917 "enable_zerocopy_send_server": true, 01:07:54.917 "enable_zerocopy_send_client": false, 01:07:54.917 "zerocopy_threshold": 0, 01:07:54.917 "tls_version": 0, 01:07:54.917 "enable_ktls": false 01:07:54.917 } 01:07:54.917 }, 
01:07:54.917 { 01:07:54.917 "method": "sock_impl_set_options", 01:07:54.917 "params": { 01:07:54.917 "impl_name": "posix", 01:07:54.917 "recv_buf_size": 2097152, 01:07:54.917 "send_buf_size": 2097152, 01:07:54.917 "enable_recv_pipe": true, 01:07:54.917 "enable_quickack": false, 01:07:54.917 "enable_placement_id": 0, 01:07:54.917 "enable_zerocopy_send_server": true, 01:07:54.917 "enable_zerocopy_send_client": false, 01:07:54.917 "zerocopy_threshold": 0, 01:07:54.917 "tls_version": 0, 01:07:54.917 "enable_ktls": false 01:07:54.917 } 01:07:54.917 }, 01:07:54.917 { 01:07:54.918 "method": "sock_impl_set_options", 01:07:54.918 "params": { 01:07:54.918 "impl_name": "uring", 01:07:54.918 "recv_buf_size": 2097152, 01:07:54.918 "send_buf_size": 2097152, 01:07:54.918 "enable_recv_pipe": true, 01:07:54.918 "enable_quickack": false, 01:07:54.918 "enable_placement_id": 0, 01:07:54.918 "enable_zerocopy_send_server": false, 01:07:54.918 "enable_zerocopy_send_client": false, 01:07:54.918 "zerocopy_threshold": 0, 01:07:54.918 "tls_version": 0, 01:07:54.918 "enable_ktls": false 01:07:54.918 } 01:07:54.918 } 01:07:54.918 ] 01:07:54.918 }, 01:07:54.918 { 01:07:54.918 "subsystem": "vmd", 01:07:54.918 "config": [] 01:07:54.918 }, 01:07:54.918 { 01:07:54.918 "subsystem": "accel", 01:07:54.918 "config": [ 01:07:54.918 { 01:07:54.918 "method": "accel_set_options", 01:07:54.918 "params": { 01:07:54.918 "small_cache_size": 128, 01:07:54.918 "large_cache_size": 16, 01:07:54.918 "task_count": 2048, 01:07:54.918 "sequence_count": 2048, 01:07:54.918 "buf_count": 2048 01:07:54.918 } 01:07:54.918 } 01:07:54.918 ] 01:07:54.918 }, 01:07:54.918 { 01:07:54.918 "subsystem": "bdev", 01:07:54.918 "config": [ 01:07:54.918 { 01:07:54.918 "method": "bdev_set_options", 01:07:54.918 "params": { 01:07:54.918 "bdev_io_pool_size": 65535, 01:07:54.918 "bdev_io_cache_size": 256, 01:07:54.918 "bdev_auto_examine": true, 01:07:54.918 "iobuf_small_cache_size": 128, 01:07:54.918 "iobuf_large_cache_size": 16 01:07:54.918 } 01:07:54.918 }, 01:07:54.918 { 01:07:54.918 "method": "bdev_raid_set_options", 01:07:54.918 "params": { 01:07:54.918 "process_window_size_kb": 1024, 01:07:54.918 "process_max_bandwidth_mb_sec": 0 01:07:54.918 } 01:07:54.918 }, 01:07:54.918 { 01:07:54.918 "method": "bdev_iscsi_set_options", 01:07:54.918 "params": { 01:07:54.918 "timeout_sec": 30 01:07:54.918 } 01:07:54.918 }, 01:07:54.918 { 01:07:54.918 "method": "bdev_nvme_set_options", 01:07:54.918 "params": { 01:07:54.918 "action_on_timeout": "none", 01:07:54.918 "timeout_us": 0, 01:07:54.918 "timeout_admin_us": 0, 01:07:54.918 "keep_alive_timeout_ms": 10000, 01:07:54.918 "arbitration_burst": 0, 01:07:54.918 "low_priority_weight": 0, 01:07:54.918 "medium_priority_weight": 0, 01:07:54.918 "high_priority_weight": 0, 01:07:54.918 "nvme_adminq_poll_period_us": 10000, 01:07:54.918 "nvme_ioq_poll_period_us": 0, 01:07:54.918 "io_queue_requests": 512, 01:07:54.918 "delay_cmd_submit": true, 01:07:54.918 "transport_retry_count": 4, 01:07:54.918 "bdev_retry_count": 3, 01:07:54.918 "transport_ack_timeout": 0, 01:07:54.918 "ctrlr_loss_timeout_sec": 0, 01:07:54.918 "reconnect_delay_sec": 0, 01:07:54.918 "fast_io_fail_timeout_sec": 0, 01:07:54.918 "disable_auto_failback": false, 01:07:54.918 "generate_uuids": false, 01:07:54.918 "transport_tos": 0, 01:07:54.918 "nvme_error_stat": false, 01:07:54.918 "rdma_srq_size": 0, 01:07:54.918 "io_path_stat": false, 01:07:54.918 "allow_accel_sequence": false, 01:07:54.918 "rdma_max_cq_size": 0, 01:07:54.918 "rdma_cm_event_timeout_ms": 0, 01:07:54.918 
"dhchap_digests": [ 01:07:54.918 "sha256", 01:07:54.918 "sha384", 01:07:54.918 "sha512" 01:07:54.918 ], 01:07:54.918 "dhchap_dhgroups": [ 01:07:54.918 "null", 01:07:54.918 "ffdhe2048", 01:07:54.918 "ffdhe3072", 01:07:54.918 "ffdhe4096", 01:07:54.918 "ffdhe6144", 01:07:54.918 "ffdhe8192" 01:07:54.918 ] 01:07:54.918 } 01:07:54.918 }, 01:07:54.918 { 01:07:54.918 "method": "bdev_nvme_attach_controller", 01:07:54.918 "params": { 01:07:54.918 "name": "TLSTEST", 01:07:54.918 "trtype": "TCP", 01:07:54.918 "adrfam": "IPv4", 01:07:54.918 "traddr": "10.0.0.3", 01:07:54.918 "trsvcid": "4420", 01:07:54.918 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:07:54.918 "prchk_reftag": false, 01:07:54.918 "prchk_guard": false, 01:07:54.918 "ctrlr_loss_timeout_sec": 0, 01:07:54.918 "reconnect_delay_sec": 0, 01:07:54.918 "fast_io_fail_timeout_sec": 0, 01:07:54.918 "psk": "key0", 01:07:54.918 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:07:54.918 "hdgst": false, 01:07:54.918 "ddgst": false, 01:07:54.918 "multipath": "multipath" 01:07:54.918 } 01:07:54.918 }, 01:07:54.918 { 01:07:54.918 "method": "bdev_nvme_set_hotplug", 01:07:54.918 "params": { 01:07:54.918 "period_us": 100000, 01:07:54.918 "enable": false 01:07:54.918 } 01:07:54.918 }, 01:07:54.918 { 01:07:54.918 "method": "bdev_wait_for_examine" 01:07:54.918 } 01:07:54.918 ] 01:07:54.918 }, 01:07:54.918 { 01:07:54.918 "subsystem": "nbd", 01:07:54.918 "config": [] 01:07:54.918 } 01:07:54.918 ] 01:07:54.918 }' 01:07:54.918 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 71521 01:07:54.918 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71521 ']' 01:07:54.918 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71521 01:07:54.918 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:07:54.918 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:07:54.918 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71521 01:07:54.918 killing process with pid 71521 01:07:54.918 Received shutdown signal, test time was about 10.000000 seconds 01:07:54.918 01:07:54.918 Latency(us) 01:07:54.918 [2024-12-09T06:06:49.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:07:54.918 [2024-12-09T06:06:49.505Z] =================================================================================================================== 01:07:54.918 [2024-12-09T06:06:49.505Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:07:54.918 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:07:54.918 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:07:54.918 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71521' 01:07:54.918 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71521 01:07:54.918 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71521 01:07:54.918 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 71468 01:07:54.918 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71468 ']' 01:07:54.918 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 71468 01:07:54.918 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:07:54.918 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:07:54.918 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71468 01:07:55.178 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:07:55.178 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:07:55.178 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71468' 01:07:55.178 killing process with pid 71468 01:07:55.178 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71468 01:07:55.178 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71468 01:07:55.438 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 01:07:55.438 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:07:55.438 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:07:55.438 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 01:07:55.438 "subsystems": [ 01:07:55.438 { 01:07:55.438 "subsystem": "keyring", 01:07:55.438 "config": [ 01:07:55.438 { 01:07:55.438 "method": "keyring_file_add_key", 01:07:55.438 "params": { 01:07:55.438 "name": "key0", 01:07:55.438 "path": "/tmp/tmp.zgX4aXhYc3" 01:07:55.438 } 01:07:55.438 } 01:07:55.438 ] 01:07:55.438 }, 01:07:55.438 { 01:07:55.438 "subsystem": "iobuf", 01:07:55.438 "config": [ 01:07:55.438 { 01:07:55.438 "method": "iobuf_set_options", 01:07:55.438 "params": { 01:07:55.438 "small_pool_count": 8192, 01:07:55.438 "large_pool_count": 1024, 01:07:55.438 "small_bufsize": 8192, 01:07:55.438 "large_bufsize": 135168, 01:07:55.438 "enable_numa": false 01:07:55.438 } 01:07:55.438 } 01:07:55.438 ] 01:07:55.438 }, 01:07:55.438 { 01:07:55.438 "subsystem": "sock", 01:07:55.438 "config": [ 01:07:55.438 { 01:07:55.438 "method": "sock_set_default_impl", 01:07:55.438 "params": { 01:07:55.438 "impl_name": "uring" 01:07:55.438 } 01:07:55.438 }, 01:07:55.438 { 01:07:55.438 "method": "sock_impl_set_options", 01:07:55.438 "params": { 01:07:55.438 "impl_name": "ssl", 01:07:55.438 "recv_buf_size": 4096, 01:07:55.438 "send_buf_size": 4096, 01:07:55.438 "enable_recv_pipe": true, 01:07:55.438 "enable_quickack": false, 01:07:55.438 "enable_placement_id": 0, 01:07:55.438 "enable_zerocopy_send_server": true, 01:07:55.438 "enable_zerocopy_send_client": false, 01:07:55.438 "zerocopy_threshold": 0, 01:07:55.438 "tls_version": 0, 01:07:55.438 "enable_ktls": false 01:07:55.438 } 01:07:55.438 }, 01:07:55.438 { 01:07:55.438 "method": "sock_impl_set_options", 01:07:55.438 "params": { 01:07:55.438 "impl_name": "posix", 01:07:55.438 "recv_buf_size": 2097152, 01:07:55.438 "send_buf_size": 2097152, 01:07:55.438 "enable_recv_pipe": true, 01:07:55.438 "enable_quickack": false, 01:07:55.438 "enable_placement_id": 0, 01:07:55.438 "enable_zerocopy_send_server": true, 01:07:55.438 "enable_zerocopy_send_client": false, 01:07:55.438 "zerocopy_threshold": 0, 01:07:55.438 "tls_version": 0, 01:07:55.438 "enable_ktls": false 01:07:55.438 } 01:07:55.438 }, 01:07:55.438 { 01:07:55.438 "method": "sock_impl_set_options", 
01:07:55.438 "params": { 01:07:55.438 "impl_name": "uring", 01:07:55.438 "recv_buf_size": 2097152, 01:07:55.438 "send_buf_size": 2097152, 01:07:55.438 "enable_recv_pipe": true, 01:07:55.438 "enable_quickack": false, 01:07:55.438 "enable_placement_id": 0, 01:07:55.438 "enable_zerocopy_send_server": false, 01:07:55.438 "enable_zerocopy_send_client": false, 01:07:55.438 "zerocopy_threshold": 0, 01:07:55.438 "tls_version": 0, 01:07:55.438 "enable_ktls": false 01:07:55.438 } 01:07:55.438 } 01:07:55.438 ] 01:07:55.438 }, 01:07:55.438 { 01:07:55.438 "subsystem": "vmd", 01:07:55.438 "config": [] 01:07:55.438 }, 01:07:55.438 { 01:07:55.438 "subsystem": "accel", 01:07:55.438 "config": [ 01:07:55.438 { 01:07:55.438 "method": "accel_set_options", 01:07:55.438 "params": { 01:07:55.438 "small_cache_size": 128, 01:07:55.438 "large_cache_size": 16, 01:07:55.438 "task_count": 2048, 01:07:55.438 "sequence_count": 2048, 01:07:55.438 "buf_count": 2048 01:07:55.438 } 01:07:55.438 } 01:07:55.438 ] 01:07:55.438 }, 01:07:55.438 { 01:07:55.438 "subsystem": "bdev", 01:07:55.438 "config": [ 01:07:55.438 { 01:07:55.438 "method": "bdev_set_options", 01:07:55.438 "params": { 01:07:55.438 "bdev_io_pool_size": 65535, 01:07:55.438 "bdev_io_cache_size": 256, 01:07:55.438 "bdev_auto_examine": true, 01:07:55.438 "iobuf_small_cache_size": 128, 01:07:55.438 "iobuf_large_cache_size": 16 01:07:55.438 } 01:07:55.438 }, 01:07:55.438 { 01:07:55.438 "method": "bdev_raid_set_options", 01:07:55.438 "params": { 01:07:55.438 "process_window_size_kb": 1024, 01:07:55.438 "process_max_bandwidth_mb_sec": 0 01:07:55.438 } 01:07:55.438 }, 01:07:55.438 { 01:07:55.438 "method": "bdev_iscsi_set_options", 01:07:55.438 "params": { 01:07:55.438 "timeout_sec": 30 01:07:55.438 } 01:07:55.438 }, 01:07:55.438 { 01:07:55.438 "method": "bdev_nvme_set_options", 01:07:55.438 "params": { 01:07:55.438 "action_on_timeout": "none", 01:07:55.438 "timeout_us": 0, 01:07:55.438 "timeout_admin_us": 0, 01:07:55.438 "keep_alive_timeout_ms": 10000, 01:07:55.438 "arbitration_burst": 0, 01:07:55.438 "low_priority_weight": 0, 01:07:55.438 "medium_priority_weight": 0, 01:07:55.438 "high_priority_weight": 0, 01:07:55.438 "nvme_adminq_poll_period_us": 10000, 01:07:55.438 "nvme_ioq_poll_period_us": 0, 01:07:55.438 "io_queue_requests": 0, 01:07:55.438 "delay_cmd_submit": true, 01:07:55.438 "transport_retry_count": 4, 01:07:55.438 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:07:55.438 "bdev_retry_count": 3, 01:07:55.438 "transport_ack_timeout": 0, 01:07:55.438 "ctrlr_loss_timeout_sec": 0, 01:07:55.438 "reconnect_delay_sec": 0, 01:07:55.438 "fast_io_fail_timeout_sec": 0, 01:07:55.439 "disable_auto_failback": false, 01:07:55.439 "generate_uuids": false, 01:07:55.439 "transport_tos": 0, 01:07:55.439 "nvme_error_stat": false, 01:07:55.439 "rdma_srq_size": 0, 01:07:55.439 "io_path_stat": false, 01:07:55.439 "allow_accel_sequence": false, 01:07:55.439 "rdma_max_cq_size": 0, 01:07:55.439 "rdma_cm_event_timeout_ms": 0, 01:07:55.439 "dhchap_digests": [ 01:07:55.439 "sha256", 01:07:55.439 "sha384", 01:07:55.439 "sha512" 01:07:55.439 ], 01:07:55.439 "dhchap_dhgroups": [ 01:07:55.439 "null", 01:07:55.439 "ffdhe2048", 01:07:55.439 "ffdhe3072", 01:07:55.439 "ffdhe4096", 01:07:55.439 "ffdhe6144", 01:07:55.439 "ffdhe8192" 01:07:55.439 ] 01:07:55.439 } 01:07:55.439 }, 01:07:55.439 { 01:07:55.439 "method": "bdev_nvme_set_hotplug", 01:07:55.439 "params": { 01:07:55.439 "period_us": 100000, 01:07:55.439 "enable": false 01:07:55.439 } 01:07:55.439 }, 
01:07:55.439 { 01:07:55.439 "method": "bdev_malloc_create", 01:07:55.439 "params": { 01:07:55.439 "name": "malloc0", 01:07:55.439 "num_blocks": 8192, 01:07:55.439 "block_size": 4096, 01:07:55.439 "physical_block_size": 4096, 01:07:55.439 "uuid": "02dcac9a-9fb9-4a6c-8a79-002212decf11", 01:07:55.439 "optimal_io_boundary": 0, 01:07:55.439 "md_size": 0, 01:07:55.439 "dif_type": 0, 01:07:55.439 "dif_is_head_of_md": false, 01:07:55.439 "dif_pi_format": 0 01:07:55.439 } 01:07:55.439 }, 01:07:55.439 { 01:07:55.439 "method": "bdev_wait_for_examine" 01:07:55.439 } 01:07:55.439 ] 01:07:55.439 }, 01:07:55.439 { 01:07:55.439 "subsystem": "nbd", 01:07:55.439 "config": [] 01:07:55.439 }, 01:07:55.439 { 01:07:55.439 "subsystem": "scheduler", 01:07:55.439 "config": [ 01:07:55.439 { 01:07:55.439 "method": "framework_set_scheduler", 01:07:55.439 "params": { 01:07:55.439 "name": "static" 01:07:55.439 } 01:07:55.439 } 01:07:55.439 ] 01:07:55.439 }, 01:07:55.439 { 01:07:55.439 "subsystem": "nvmf", 01:07:55.439 "config": [ 01:07:55.439 { 01:07:55.439 "method": "nvmf_set_config", 01:07:55.439 "params": { 01:07:55.439 "discovery_filter": "match_any", 01:07:55.439 "admin_cmd_passthru": { 01:07:55.439 "identify_ctrlr": false 01:07:55.439 }, 01:07:55.439 "dhchap_digests": [ 01:07:55.439 "sha256", 01:07:55.439 "sha384", 01:07:55.439 "sha512" 01:07:55.439 ], 01:07:55.439 "dhchap_dhgroups": [ 01:07:55.439 "null", 01:07:55.439 "ffdhe2048", 01:07:55.439 "ffdhe3072", 01:07:55.439 "ffdhe4096", 01:07:55.439 "ffdhe6144", 01:07:55.439 "ffdhe8192" 01:07:55.439 ] 01:07:55.439 } 01:07:55.439 }, 01:07:55.439 { 01:07:55.439 "method": "nvmf_set_max_subsystems", 01:07:55.439 "params": { 01:07:55.439 "max_subsystems": 1024 01:07:55.439 } 01:07:55.439 }, 01:07:55.439 { 01:07:55.439 "method": "nvmf_set_crdt", 01:07:55.439 "params": { 01:07:55.439 "crdt1": 0, 01:07:55.439 "crdt2": 0, 01:07:55.439 "crdt3": 0 01:07:55.439 } 01:07:55.439 }, 01:07:55.439 { 01:07:55.439 "method": "nvmf_create_transport", 01:07:55.439 "params": { 01:07:55.439 "trtype": "TCP", 01:07:55.439 "max_queue_depth": 128, 01:07:55.439 "max_io_qpairs_per_ctrlr": 127, 01:07:55.439 "in_capsule_data_size": 4096, 01:07:55.439 "max_io_size": 131072, 01:07:55.439 "io_unit_size": 131072, 01:07:55.439 "max_aq_depth": 128, 01:07:55.439 "num_shared_buffers": 511, 01:07:55.439 "buf_cache_size": 4294967295, 01:07:55.439 "dif_insert_or_strip": false, 01:07:55.439 "zcopy": false, 01:07:55.439 "c2h_success": false, 01:07:55.439 "sock_priority": 0, 01:07:55.439 "abort_timeout_sec": 1, 01:07:55.439 "ack_timeout": 0, 01:07:55.439 "data_wr_pool_size": 0 01:07:55.439 } 01:07:55.439 }, 01:07:55.439 { 01:07:55.439 "method": "nvmf_create_subsystem", 01:07:55.439 "params": { 01:07:55.439 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:07:55.439 "allow_any_host": false, 01:07:55.439 "serial_number": "SPDK00000000000001", 01:07:55.439 "model_number": "SPDK bdev Controller", 01:07:55.439 "max_namespaces": 10, 01:07:55.439 "min_cntlid": 1, 01:07:55.439 "max_cntlid": 65519, 01:07:55.439 "ana_reporting": false 01:07:55.439 } 01:07:55.439 }, 01:07:55.439 { 01:07:55.439 "method": "nvmf_subsystem_add_host", 01:07:55.439 "params": { 01:07:55.439 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:07:55.439 "host": "nqn.2016-06.io.spdk:host1", 01:07:55.439 "psk": "key0" 01:07:55.439 } 01:07:55.439 }, 01:07:55.439 { 01:07:55.439 "method": "nvmf_subsystem_add_ns", 01:07:55.439 "params": { 01:07:55.439 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:07:55.439 "namespace": { 01:07:55.439 "nsid": 1, 01:07:55.439 "bdev_name": "malloc0", 
01:07:55.439 "nguid": "02DCAC9A9FB94A6C8A79002212DECF11", 01:07:55.439 "uuid": "02dcac9a-9fb9-4a6c-8a79-002212decf11", 01:07:55.439 "no_auto_visible": false 01:07:55.439 } 01:07:55.439 } 01:07:55.439 }, 01:07:55.439 { 01:07:55.439 "method": "nvmf_subsystem_add_listener", 01:07:55.439 "params": { 01:07:55.439 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:07:55.439 "listen_address": { 01:07:55.439 "trtype": "TCP", 01:07:55.439 "adrfam": "IPv4", 01:07:55.439 "traddr": "10.0.0.3", 01:07:55.439 "trsvcid": "4420" 01:07:55.439 }, 01:07:55.439 "secure_channel": true 01:07:55.439 } 01:07:55.439 } 01:07:55.439 ] 01:07:55.439 } 01:07:55.439 ] 01:07:55.439 }' 01:07:55.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71571 01:07:55.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 01:07:55.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:07:55.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71571 01:07:55.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71571 ']' 01:07:55.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:07:55.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:07:55.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:07:55.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:07:55.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:07:55.439 [2024-12-09 06:06:49.871684] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:07:55.439 [2024-12-09 06:06:49.871884] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:07:55.439 [2024-12-09 06:06:50.005373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:55.699 [2024-12-09 06:06:50.084994] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:07:55.699 [2024-12-09 06:06:50.085192] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:07:55.699 [2024-12-09 06:06:50.085213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:07:55.699 [2024-12-09 06:06:50.085223] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:07:55.699 [2024-12-09 06:06:50.085231] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
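This last phase replays the configuration captured earlier: save_config is run against both the target (default /var/tmp/spdk.sock) and the bdevperf socket, and the resulting JSON is fed straight back into fresh processes as -c /dev/fd/62 and -c /dev/fd/63, evidently via process substitution rather than an on-disk file. A rough Python sketch of the same round trip, with a temp file standing in for /dev/fd and the ip netns wrapper used by the test omitted (paths and flags are the ones from this log):

    import subprocess
    import tempfile

    SPDK = "/home/vagrant/spdk_repo/spdk"   # repo path used throughout this log

    # 1. Dump the running target's configuration (same RPC the test calls above).
    cfg = subprocess.run(
        [f"{SPDK}/scripts/rpc.py", "-s", "/var/tmp/spdk.sock", "save_config"],
        check=True, capture_output=True, text=True).stdout

    # 2. Start a fresh nvmf_tgt from that JSON; the test passes it as -c /dev/fd/62 instead.
    with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
        f.write(cfg)
    subprocess.Popen([f"{SPDK}/build/bin/nvmf_tgt",
                      "-i", "0", "-e", "0xFFFF", "-m", "0x2", "-c", f.name])
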
01:07:55.699 [2024-12-09 06:06:50.085627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:07:55.699 [2024-12-09 06:06:50.240905] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:07:55.959 [2024-12-09 06:06:50.313196] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:07:55.959 [2024-12-09 06:06:50.345064] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:07:55.959 [2024-12-09 06:06:50.345280] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:07:56.218 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:07:56.218 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:07:56.218 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:07:56.218 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:07:56.218 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:07:56.479 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:07:56.479 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=71603 01:07:56.479 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 71603 /var/tmp/bdevperf.sock 01:07:56.479 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71603 ']' 01:07:56.479 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 01:07:56.479 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:07:56.479 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:07:56.479 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 01:07:56.479 "subsystems": [ 01:07:56.479 { 01:07:56.479 "subsystem": "keyring", 01:07:56.479 "config": [ 01:07:56.479 { 01:07:56.479 "method": "keyring_file_add_key", 01:07:56.479 "params": { 01:07:56.479 "name": "key0", 01:07:56.479 "path": "/tmp/tmp.zgX4aXhYc3" 01:07:56.479 } 01:07:56.479 } 01:07:56.479 ] 01:07:56.479 }, 01:07:56.479 { 01:07:56.479 "subsystem": "iobuf", 01:07:56.479 "config": [ 01:07:56.479 { 01:07:56.479 "method": "iobuf_set_options", 01:07:56.479 "params": { 01:07:56.479 "small_pool_count": 8192, 01:07:56.479 "large_pool_count": 1024, 01:07:56.479 "small_bufsize": 8192, 01:07:56.479 "large_bufsize": 135168, 01:07:56.479 "enable_numa": false 01:07:56.479 } 01:07:56.479 } 01:07:56.479 ] 01:07:56.479 }, 01:07:56.479 { 01:07:56.479 "subsystem": "sock", 01:07:56.479 "config": [ 01:07:56.479 { 01:07:56.479 "method": "sock_set_default_impl", 01:07:56.479 "params": { 01:07:56.479 "impl_name": "uring" 01:07:56.479 } 01:07:56.479 }, 01:07:56.479 { 01:07:56.479 "method": "sock_impl_set_options", 01:07:56.479 "params": { 01:07:56.479 "impl_name": "ssl", 01:07:56.479 "recv_buf_size": 4096, 01:07:56.479 "send_buf_size": 4096, 01:07:56.479 "enable_recv_pipe": true, 01:07:56.479 "enable_quickack": false, 01:07:56.479 "enable_placement_id": 0, 01:07:56.479 "enable_zerocopy_send_server": true, 01:07:56.479 
"enable_zerocopy_send_client": false, 01:07:56.479 "zerocopy_threshold": 0, 01:07:56.479 "tls_version": 0, 01:07:56.480 "enable_ktls": false 01:07:56.480 } 01:07:56.480 }, 01:07:56.480 { 01:07:56.480 "method": "sock_impl_set_options", 01:07:56.480 "params": { 01:07:56.480 "impl_name": "posix", 01:07:56.480 "recv_buf_size": 2097152, 01:07:56.480 "send_buf_size": 2097152, 01:07:56.480 "enable_recv_pipe": true, 01:07:56.480 "enable_quickack": false, 01:07:56.480 "enable_placement_id": 0, 01:07:56.480 "enable_zerocopy_send_server": true, 01:07:56.480 "enable_zerocopy_send_client": false, 01:07:56.480 "zerocopy_threshold": 0, 01:07:56.480 "tls_version": 0, 01:07:56.480 "enable_ktls": false 01:07:56.480 } 01:07:56.480 }, 01:07:56.480 { 01:07:56.480 "method": "sock_impl_set_options", 01:07:56.480 "params": { 01:07:56.480 "impl_name": "uring", 01:07:56.480 "recv_buf_size": 2097152, 01:07:56.480 "send_buf_size": 2097152, 01:07:56.480 "enable_recv_pipe": true, 01:07:56.480 "enable_quickack": false, 01:07:56.480 "enable_placement_id": 0, 01:07:56.480 "enable_zerocopy_send_server": false, 01:07:56.480 "enable_zerocopy_send_client": false, 01:07:56.480 "zerocopy_threshold": 0, 01:07:56.480 "tls_version": 0, 01:07:56.480 "enable_ktls": false 01:07:56.480 } 01:07:56.480 } 01:07:56.480 ] 01:07:56.480 }, 01:07:56.480 { 01:07:56.480 "subsystem": "vmd", 01:07:56.480 "config": [] 01:07:56.480 }, 01:07:56.480 { 01:07:56.480 "subsystem": "accel", 01:07:56.480 "config": [ 01:07:56.480 { 01:07:56.480 "method": "accel_set_options", 01:07:56.480 "params": { 01:07:56.480 "small_cache_size": 128, 01:07:56.480 "large_cache_size": 16, 01:07:56.480 "task_count": 2048, 01:07:56.480 "sequence_count": 2048, 01:07:56.480 "buf_count": 2048 01:07:56.480 } 01:07:56.480 } 01:07:56.480 ] 01:07:56.480 }, 01:07:56.480 { 01:07:56.480 "subsystem": "bdev", 01:07:56.480 "config": [ 01:07:56.480 { 01:07:56.480 "method": "bdev_set_options", 01:07:56.480 "params": { 01:07:56.480 "bdev_io_pool_size": 65535, 01:07:56.480 "bdev_io_cache_size": 256, 01:07:56.480 "bdev_auto_examine": true, 01:07:56.480 "iobuf_small_cache_size": 128, 01:07:56.480 "iobuf_large_cache_size": 16 01:07:56.480 } 01:07:56.480 }, 01:07:56.480 { 01:07:56.480 "method": "bdev_raid_set_options", 01:07:56.480 "params": { 01:07:56.480 "process_window_size_kb": 1024, 01:07:56.480 "process_max_bandwidth_mb_sec": 0 01:07:56.480 } 01:07:56.480 }, 01:07:56.480 { 01:07:56.480 "method": "bdev_iscsi_set_options", 01:07:56.480 "params": { 01:07:56.480 "timeout_sec": 30 01:07:56.480 } 01:07:56.480 }, 01:07:56.480 { 01:07:56.480 "method": "bdev_nvme_set_options", 01:07:56.480 "params": { 01:07:56.480 "action_on_timeout": "none", 01:07:56.480 "timeout_us": 0, 01:07:56.480 "timeout_admin_us": 0, 01:07:56.480 "keep_alive_timeout_ms": 10000, 01:07:56.480 "arbitration_burst": 0, 01:07:56.480 "low_priority_weight": 0, 01:07:56.480 "medium_priority_weight": 0, 01:07:56.480 "high_priority_weight": 0, 01:07:56.480 "nvme_adminq_poll_period_us": 10000, 01:07:56.480 "nvme_ioq_poll_period_us": 0, 01:07:56.480 "io_queue_requests": 512, 01:07:56.480 "delay_cmd_submit": true, 01:07:56.480 "transport_retry_count": 4, 01:07:56.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:07:56.480 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
01:07:56.480 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:07:56.480 "bdev_retry_count": 3, 01:07:56.480 "transport_ack_timeout": 0, 01:07:56.480 "ctrlr_loss_timeout_sec": 0, 01:07:56.480 "reconnect_delay_sec": 0, 01:07:56.480 "fast_io_fail_timeout_sec": 0, 01:07:56.480 "disable_auto_failback": false, 01:07:56.480 "generate_uuids": false, 01:07:56.480 "transport_tos": 0, 01:07:56.480 "nvme_error_stat": false, 01:07:56.480 "rdma_srq_size": 0, 01:07:56.480 "io_path_stat": false, 01:07:56.480 "allow_accel_sequence": false, 01:07:56.480 "rdma_max_cq_size": 0, 01:07:56.480 "rdma_cm_event_timeout_ms": 0, 01:07:56.480 "dhchap_digests": [ 01:07:56.480 "sha256", 01:07:56.480 "sha384", 01:07:56.480 "sha512" 01:07:56.480 ], 01:07:56.480 "dhchap_dhgroups": [ 01:07:56.480 "null", 01:07:56.480 "ffdhe2048", 01:07:56.480 "ffdhe3072", 01:07:56.480 "ffdhe4096", 01:07:56.480 "ffdhe6144", 01:07:56.480 "ffdhe8192" 01:07:56.480 ] 01:07:56.480 } 01:07:56.480 }, 01:07:56.480 { 01:07:56.480 "method": "bdev_nvme_attach_controller", 01:07:56.480 "params": { 01:07:56.480 "name": "TLSTEST", 01:07:56.480 "trtype": "TCP", 01:07:56.480 "adrfam": "IPv4", 01:07:56.480 "traddr": "10.0.0.3", 01:07:56.480 "trsvcid": "4420", 01:07:56.480 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:07:56.480 "prchk_reftag": false, 01:07:56.480 "prchk_guard": false, 01:07:56.480 "ctrlr_loss_timeout_sec": 0, 01:07:56.480 "reconnect_delay_sec": 0, 01:07:56.480 "fast_io_fail_timeout_sec": 0, 01:07:56.480 "psk": "key0", 01:07:56.480 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:07:56.480 "hdgst": false, 01:07:56.480 "ddgst": false, 01:07:56.480 "multipath": "multipath" 01:07:56.480 } 01:07:56.480 }, 01:07:56.480 { 01:07:56.480 "method": "bdev_nvme_set_hotplug", 01:07:56.480 "params": { 01:07:56.480 "period_us": 100000, 01:07:56.480 "enable": false 01:07:56.480 } 01:07:56.480 }, 01:07:56.480 { 01:07:56.480 "method": "bdev_wait_for_examine" 01:07:56.480 } 01:07:56.480 ] 01:07:56.480 }, 01:07:56.480 { 01:07:56.480 "subsystem": "nbd", 01:07:56.480 "config": [] 01:07:56.480 } 01:07:56.480 ] 01:07:56.480 }' 01:07:56.480 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:07:56.480 [2024-12-09 06:06:50.861676] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
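The bdevperf instance above receives its configuration through bash process substitution rather than a file on disk: the traced echo of the JSON blob is paired with the -c /dev/fd/63 argument on the bdevperf command line. A minimal sketch of that pattern, with a placeholder config standing in for the full JSON echoed above:

    # placeholder config; the test substitutes the full JSON shown above
    bperf_cfg='{ "subsystems": [] }'
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 \
        -c <(echo "$bperf_cfg")

bash expands <(...) to a /dev/fd/N path, so bdevperf reads the echoed JSON as if it were a config file; that is why the configuration shows up in the trace as an echo rather than a filename.
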
01:07:56.480 [2024-12-09 06:06:50.861864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71603 ] 01:07:56.480 [2024-12-09 06:06:51.010141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:07:56.480 [2024-12-09 06:06:51.050305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:07:56.740 [2024-12-09 06:06:51.173800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:07:56.740 [2024-12-09 06:06:51.216176] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:07:57.309 06:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:07:57.309 06:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:07:57.309 06:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 01:07:57.309 Running I/O for 10 seconds... 01:07:59.625 5671.00 IOPS, 22.15 MiB/s [2024-12-09T06:06:55.146Z] 5738.00 IOPS, 22.41 MiB/s [2024-12-09T06:06:56.083Z] 5747.00 IOPS, 22.45 MiB/s [2024-12-09T06:06:57.019Z] 5758.50 IOPS, 22.49 MiB/s [2024-12-09T06:06:57.955Z] 5765.40 IOPS, 22.52 MiB/s [2024-12-09T06:06:58.906Z] 5768.83 IOPS, 22.53 MiB/s [2024-12-09T06:06:59.932Z] 5770.29 IOPS, 22.54 MiB/s [2024-12-09T06:07:00.870Z] 5764.88 IOPS, 22.52 MiB/s [2024-12-09T06:07:02.251Z] 5762.56 IOPS, 22.51 MiB/s [2024-12-09T06:07:02.251Z] 5762.60 IOPS, 22.51 MiB/s 01:08:07.664 Latency(us) 01:08:07.664 [2024-12-09T06:07:02.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:08:07.664 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:08:07.664 Verification LBA range: start 0x0 length 0x2000 01:08:07.664 TLSTESTn1 : 10.01 5768.65 22.53 0.00 0.00 22156.89 4474.35 21687.42 01:08:07.664 [2024-12-09T06:07:02.251Z] =================================================================================================================== 01:08:07.664 [2024-12-09T06:07:02.251Z] Total : 5768.65 22.53 0.00 0.00 22156.89 4474.35 21687.42 01:08:07.664 { 01:08:07.664 "results": [ 01:08:07.664 { 01:08:07.664 "job": "TLSTESTn1", 01:08:07.664 "core_mask": "0x4", 01:08:07.664 "workload": "verify", 01:08:07.664 "status": "finished", 01:08:07.664 "verify_range": { 01:08:07.664 "start": 0, 01:08:07.664 "length": 8192 01:08:07.664 }, 01:08:07.664 "queue_depth": 128, 01:08:07.664 "io_size": 4096, 01:08:07.664 "runtime": 10.011706, 01:08:07.664 "iops": 5768.647221562439, 01:08:07.664 "mibps": 22.533778209228277, 01:08:07.664 "io_failed": 0, 01:08:07.664 "io_timeout": 0, 01:08:07.664 "avg_latency_us": 22156.889579330586, 01:08:07.664 "min_latency_us": 4474.345381526105, 01:08:07.664 "max_latency_us": 21687.415261044178 01:08:07.664 } 01:08:07.664 ], 01:08:07.664 "core_count": 1 01:08:07.664 } 01:08:07.664 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:08:07.664 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 71603 01:08:07.664 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71603 ']' 01:08:07.664 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 71603 01:08:07.664 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:08:07.664 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:08:07.664 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71603 01:08:07.664 killing process with pid 71603 01:08:07.664 Received shutdown signal, test time was about 10.000000 seconds 01:08:07.664 01:08:07.664 Latency(us) 01:08:07.664 [2024-12-09T06:07:02.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:08:07.664 [2024-12-09T06:07:02.251Z] =================================================================================================================== 01:08:07.664 [2024-12-09T06:07:02.251Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:08:07.664 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:08:07.664 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:08:07.664 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71603' 01:08:07.664 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71603 01:08:07.664 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71603 01:08:07.664 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 71571 01:08:07.664 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71571 ']' 01:08:07.664 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71571 01:08:07.664 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:08:07.664 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:08:07.664 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71571 01:08:07.664 killing process with pid 71571 01:08:07.664 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:08:07.664 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:08:07.664 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71571' 01:08:07.664 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71571 01:08:07.664 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71571 01:08:07.924 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 01:08:07.924 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:08:07.924 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:08:07.924 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:08:07.924 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71740 01:08:07.924 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71740 01:08:07.924 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:08:07.924 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71740 ']' 01:08:07.924 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:08:07.924 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:08:07.924 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:08:07.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:08:07.924 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:08:07.924 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:08:07.924 [2024-12-09 06:07:02.498452] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:08:07.924 [2024-12-09 06:07:02.498741] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:08:08.184 [2024-12-09 06:07:02.649720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:08:08.184 [2024-12-09 06:07:02.687845] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:08:08.184 [2024-12-09 06:07:02.687895] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:08:08.184 [2024-12-09 06:07:02.687905] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:08:08.184 [2024-12-09 06:07:02.687913] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:08:08.184 [2024-12-09 06:07:02.687919] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
01:08:08.184 [2024-12-09 06:07:02.688187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:08:08.184 [2024-12-09 06:07:02.729948] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:08:09.124 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:08:09.124 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:08:09.124 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:08:09.124 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:08:09.124 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:08:09.124 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:08:09.124 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.zgX4aXhYc3 01:08:09.124 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zgX4aXhYc3 01:08:09.124 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:08:09.124 [2024-12-09 06:07:03.630232] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:08:09.124 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:08:09.384 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 01:08:09.644 [2024-12-09 06:07:04.033623] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:08:09.644 [2024-12-09 06:07:04.033936] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:08:09.644 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:08:09.903 malloc0 01:08:09.903 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:08:09.903 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zgX4aXhYc3 01:08:10.163 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 01:08:10.423 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=71791 01:08:10.423 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 01:08:10.423 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:08:10.423 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 71791 /var/tmp/bdevperf.sock 01:08:10.423 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71791 ']' 01:08:10.423 
06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:08:10.423 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:08:10.423 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:08:10.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:08:10.423 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:08:10.423 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:08:10.423 [2024-12-09 06:07:04.918880] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:08:10.423 [2024-12-09 06:07:04.918950] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71791 ] 01:08:10.684 [2024-12-09 06:07:05.072492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:08:10.684 [2024-12-09 06:07:05.129077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:08:10.684 [2024-12-09 06:07:05.201015] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:08:11.251 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:08:11.251 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:08:11.251 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zgX4aXhYc3 01:08:11.510 06:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 01:08:11.768 [2024-12-09 06:07:06.163589] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:08:11.768 nvme0n1 01:08:11.768 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:08:11.768 Running I/O for 1 seconds... 
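On the initiator side, TLS follows the same two-step pattern just traced against the bdevperf RPC socket: register the PSK with the keyring, then attach the controller with --psk. Isolated from the surrounding xtrace noise, and assuming the same addresses and key file as above:

    # host-side TLS setup, as traced against /var/tmp/bdevperf.sock above
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zgX4aXhYc3
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The resulting bdev (nvme0n1) is then exercised via bdevperf.py perform_tests; its results follow below.
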
01:08:13.144 5661.00 IOPS, 22.11 MiB/s 01:08:13.144 Latency(us) 01:08:13.144 [2024-12-09T06:07:07.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:08:13.144 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:08:13.144 Verification LBA range: start 0x0 length 0x2000 01:08:13.144 nvme0n1 : 1.01 5711.55 22.31 0.00 0.00 22245.50 4948.10 17370.99 01:08:13.144 [2024-12-09T06:07:07.731Z] =================================================================================================================== 01:08:13.144 [2024-12-09T06:07:07.731Z] Total : 5711.55 22.31 0.00 0.00 22245.50 4948.10 17370.99 01:08:13.144 { 01:08:13.144 "results": [ 01:08:13.144 { 01:08:13.144 "job": "nvme0n1", 01:08:13.144 "core_mask": "0x2", 01:08:13.144 "workload": "verify", 01:08:13.144 "status": "finished", 01:08:13.144 "verify_range": { 01:08:13.144 "start": 0, 01:08:13.144 "length": 8192 01:08:13.144 }, 01:08:13.144 "queue_depth": 128, 01:08:13.144 "io_size": 4096, 01:08:13.144 "runtime": 1.013561, 01:08:13.144 "iops": 5711.545728377473, 01:08:13.144 "mibps": 22.310725501474504, 01:08:13.144 "io_failed": 0, 01:08:13.144 "io_timeout": 0, 01:08:13.144 "avg_latency_us": 22245.50247505829, 01:08:13.144 "min_latency_us": 4948.0995983935745, 01:08:13.144 "max_latency_us": 17370.98795180723 01:08:13.144 } 01:08:13.144 ], 01:08:13.144 "core_count": 1 01:08:13.144 } 01:08:13.144 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 71791 01:08:13.144 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71791 ']' 01:08:13.144 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71791 01:08:13.144 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:08:13.144 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:08:13.144 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71791 01:08:13.144 killing process with pid 71791 01:08:13.144 Received shutdown signal, test time was about 1.000000 seconds 01:08:13.144 01:08:13.144 Latency(us) 01:08:13.144 [2024-12-09T06:07:07.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:08:13.144 [2024-12-09T06:07:07.731Z] =================================================================================================================== 01:08:13.144 [2024-12-09T06:07:07.731Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:08:13.144 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:08:13.144 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:08:13.144 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71791' 01:08:13.144 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71791 01:08:13.144 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71791 01:08:13.144 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 71740 01:08:13.144 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71740 ']' 01:08:13.144 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71740 01:08:13.144 06:07:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:08:13.144 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:08:13.144 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71740 01:08:13.144 killing process with pid 71740 01:08:13.144 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:08:13.144 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:08:13.144 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71740' 01:08:13.144 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71740 01:08:13.144 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71740 01:08:13.403 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 01:08:13.403 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:08:13.403 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:08:13.403 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:08:13.403 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:08:13.403 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71837 01:08:13.403 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71837 01:08:13.403 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71837 ']' 01:08:13.403 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:08:13.403 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:08:13.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:08:13.403 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:08:13.403 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:08:13.403 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:08:13.403 [2024-12-09 06:07:07.946267] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:08:13.403 [2024-12-09 06:07:07.946331] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:08:13.661 [2024-12-09 06:07:08.078634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:08:13.661 [2024-12-09 06:07:08.118825] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:08:13.661 [2024-12-09 06:07:08.118870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
01:08:13.661 [2024-12-09 06:07:08.118880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:08:13.661 [2024-12-09 06:07:08.118888] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:08:13.661 [2024-12-09 06:07:08.118895] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:08:13.661 [2024-12-09 06:07:08.119191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:08:13.661 [2024-12-09 06:07:08.161197] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:08:14.599 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:08:14.599 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:08:14.599 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:08:14.599 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:08:14.599 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:08:14.599 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:08:14.599 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 01:08:14.599 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:14.599 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:08:14.599 [2024-12-09 06:07:08.905602] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:08:14.599 malloc0 01:08:14.599 [2024-12-09 06:07:08.934272] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:08:14.599 [2024-12-09 06:07:08.934450] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:08:14.599 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:14.599 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=71869 01:08:14.599 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 01:08:14.599 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 71869 /var/tmp/bdevperf.sock 01:08:14.599 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71869 ']' 01:08:14.599 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:08:14.599 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:08:14.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:08:14.599 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
01:08:14.599 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:08:14.599 06:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:08:14.599 [2024-12-09 06:07:09.012953] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:08:14.599 [2024-12-09 06:07:09.013021] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71869 ] 01:08:14.599 [2024-12-09 06:07:09.151549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:08:14.858 [2024-12-09 06:07:09.215898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:08:14.858 [2024-12-09 06:07:09.286802] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:08:15.424 06:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:08:15.424 06:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:08:15.425 06:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zgX4aXhYc3 01:08:15.683 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 01:08:15.941 [2024-12-09 06:07:10.273964] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:08:15.941 nvme0n1 01:08:15.941 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:08:15.941 Running I/O for 1 seconds... 
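bdevperf reports both IOPS and MiB/s for each run; with the fixed 4096-byte I/O size used throughout these runs (-o 4096 / -o 4k), the two differ only by a factor of 4096/2^20. A quick sanity calculation (not part of the test) against the 10-second run reported earlier:

    awk 'BEGIN { printf "%.2f MiB/s\n", 5768.65 * 4096 / 1048576 }'   # prints 22.53 MiB/s, matching the table above
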
01:08:17.140 5695.00 IOPS, 22.25 MiB/s 01:08:17.140 Latency(us) 01:08:17.140 [2024-12-09T06:07:11.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:08:17.140 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:08:17.140 Verification LBA range: start 0x0 length 0x2000 01:08:17.140 nvme0n1 : 1.01 5748.30 22.45 0.00 0.00 22109.01 4579.62 18318.50 01:08:17.140 [2024-12-09T06:07:11.727Z] =================================================================================================================== 01:08:17.140 [2024-12-09T06:07:11.727Z] Total : 5748.30 22.45 0.00 0.00 22109.01 4579.62 18318.50 01:08:17.140 { 01:08:17.140 "results": [ 01:08:17.140 { 01:08:17.140 "job": "nvme0n1", 01:08:17.140 "core_mask": "0x2", 01:08:17.140 "workload": "verify", 01:08:17.140 "status": "finished", 01:08:17.140 "verify_range": { 01:08:17.140 "start": 0, 01:08:17.140 "length": 8192 01:08:17.140 }, 01:08:17.140 "queue_depth": 128, 01:08:17.140 "io_size": 4096, 01:08:17.140 "runtime": 1.012995, 01:08:17.140 "iops": 5748.300830704989, 01:08:17.140 "mibps": 22.45430011994136, 01:08:17.140 "io_failed": 0, 01:08:17.140 "io_timeout": 0, 01:08:17.140 "avg_latency_us": 22109.0137553132, 01:08:17.140 "min_latency_us": 4579.6240963855425, 01:08:17.140 "max_latency_us": 18318.49638554217 01:08:17.140 } 01:08:17.140 ], 01:08:17.140 "core_count": 1 01:08:17.140 } 01:08:17.140 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 01:08:17.140 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:17.140 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:08:17.140 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:17.140 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 01:08:17.140 "subsystems": [ 01:08:17.140 { 01:08:17.140 "subsystem": "keyring", 01:08:17.140 "config": [ 01:08:17.140 { 01:08:17.140 "method": "keyring_file_add_key", 01:08:17.140 "params": { 01:08:17.140 "name": "key0", 01:08:17.140 "path": "/tmp/tmp.zgX4aXhYc3" 01:08:17.140 } 01:08:17.140 } 01:08:17.140 ] 01:08:17.140 }, 01:08:17.140 { 01:08:17.140 "subsystem": "iobuf", 01:08:17.140 "config": [ 01:08:17.140 { 01:08:17.140 "method": "iobuf_set_options", 01:08:17.140 "params": { 01:08:17.140 "small_pool_count": 8192, 01:08:17.140 "large_pool_count": 1024, 01:08:17.140 "small_bufsize": 8192, 01:08:17.140 "large_bufsize": 135168, 01:08:17.140 "enable_numa": false 01:08:17.140 } 01:08:17.140 } 01:08:17.140 ] 01:08:17.140 }, 01:08:17.140 { 01:08:17.140 "subsystem": "sock", 01:08:17.140 "config": [ 01:08:17.140 { 01:08:17.140 "method": "sock_set_default_impl", 01:08:17.140 "params": { 01:08:17.140 "impl_name": "uring" 01:08:17.140 } 01:08:17.140 }, 01:08:17.140 { 01:08:17.140 "method": "sock_impl_set_options", 01:08:17.140 "params": { 01:08:17.140 "impl_name": "ssl", 01:08:17.140 "recv_buf_size": 4096, 01:08:17.140 "send_buf_size": 4096, 01:08:17.140 "enable_recv_pipe": true, 01:08:17.140 "enable_quickack": false, 01:08:17.140 "enable_placement_id": 0, 01:08:17.140 "enable_zerocopy_send_server": true, 01:08:17.140 "enable_zerocopy_send_client": false, 01:08:17.140 "zerocopy_threshold": 0, 01:08:17.140 "tls_version": 0, 01:08:17.140 "enable_ktls": false 01:08:17.140 } 01:08:17.140 }, 01:08:17.140 { 01:08:17.140 "method": "sock_impl_set_options", 01:08:17.140 "params": { 01:08:17.140 "impl_name": "posix", 
01:08:17.140 "recv_buf_size": 2097152, 01:08:17.140 "send_buf_size": 2097152, 01:08:17.140 "enable_recv_pipe": true, 01:08:17.140 "enable_quickack": false, 01:08:17.140 "enable_placement_id": 0, 01:08:17.140 "enable_zerocopy_send_server": true, 01:08:17.140 "enable_zerocopy_send_client": false, 01:08:17.140 "zerocopy_threshold": 0, 01:08:17.140 "tls_version": 0, 01:08:17.140 "enable_ktls": false 01:08:17.140 } 01:08:17.140 }, 01:08:17.140 { 01:08:17.140 "method": "sock_impl_set_options", 01:08:17.140 "params": { 01:08:17.140 "impl_name": "uring", 01:08:17.140 "recv_buf_size": 2097152, 01:08:17.140 "send_buf_size": 2097152, 01:08:17.140 "enable_recv_pipe": true, 01:08:17.140 "enable_quickack": false, 01:08:17.140 "enable_placement_id": 0, 01:08:17.140 "enable_zerocopy_send_server": false, 01:08:17.140 "enable_zerocopy_send_client": false, 01:08:17.140 "zerocopy_threshold": 0, 01:08:17.140 "tls_version": 0, 01:08:17.140 "enable_ktls": false 01:08:17.140 } 01:08:17.140 } 01:08:17.140 ] 01:08:17.140 }, 01:08:17.140 { 01:08:17.140 "subsystem": "vmd", 01:08:17.140 "config": [] 01:08:17.140 }, 01:08:17.140 { 01:08:17.140 "subsystem": "accel", 01:08:17.140 "config": [ 01:08:17.140 { 01:08:17.140 "method": "accel_set_options", 01:08:17.140 "params": { 01:08:17.140 "small_cache_size": 128, 01:08:17.140 "large_cache_size": 16, 01:08:17.140 "task_count": 2048, 01:08:17.140 "sequence_count": 2048, 01:08:17.140 "buf_count": 2048 01:08:17.140 } 01:08:17.140 } 01:08:17.140 ] 01:08:17.140 }, 01:08:17.140 { 01:08:17.140 "subsystem": "bdev", 01:08:17.140 "config": [ 01:08:17.140 { 01:08:17.140 "method": "bdev_set_options", 01:08:17.140 "params": { 01:08:17.140 "bdev_io_pool_size": 65535, 01:08:17.140 "bdev_io_cache_size": 256, 01:08:17.140 "bdev_auto_examine": true, 01:08:17.140 "iobuf_small_cache_size": 128, 01:08:17.140 "iobuf_large_cache_size": 16 01:08:17.140 } 01:08:17.140 }, 01:08:17.140 { 01:08:17.140 "method": "bdev_raid_set_options", 01:08:17.140 "params": { 01:08:17.140 "process_window_size_kb": 1024, 01:08:17.140 "process_max_bandwidth_mb_sec": 0 01:08:17.140 } 01:08:17.140 }, 01:08:17.140 { 01:08:17.140 "method": "bdev_iscsi_set_options", 01:08:17.140 "params": { 01:08:17.140 "timeout_sec": 30 01:08:17.140 } 01:08:17.140 }, 01:08:17.140 { 01:08:17.140 "method": "bdev_nvme_set_options", 01:08:17.140 "params": { 01:08:17.140 "action_on_timeout": "none", 01:08:17.140 "timeout_us": 0, 01:08:17.140 "timeout_admin_us": 0, 01:08:17.140 "keep_alive_timeout_ms": 10000, 01:08:17.140 "arbitration_burst": 0, 01:08:17.140 "low_priority_weight": 0, 01:08:17.140 "medium_priority_weight": 0, 01:08:17.140 "high_priority_weight": 0, 01:08:17.140 "nvme_adminq_poll_period_us": 10000, 01:08:17.140 "nvme_ioq_poll_period_us": 0, 01:08:17.140 "io_queue_requests": 0, 01:08:17.140 "delay_cmd_submit": true, 01:08:17.140 "transport_retry_count": 4, 01:08:17.140 "bdev_retry_count": 3, 01:08:17.140 "transport_ack_timeout": 0, 01:08:17.140 "ctrlr_loss_timeout_sec": 0, 01:08:17.140 "reconnect_delay_sec": 0, 01:08:17.140 "fast_io_fail_timeout_sec": 0, 01:08:17.140 "disable_auto_failback": false, 01:08:17.140 "generate_uuids": false, 01:08:17.140 "transport_tos": 0, 01:08:17.140 "nvme_error_stat": false, 01:08:17.140 "rdma_srq_size": 0, 01:08:17.140 "io_path_stat": false, 01:08:17.140 "allow_accel_sequence": false, 01:08:17.140 "rdma_max_cq_size": 0, 01:08:17.140 "rdma_cm_event_timeout_ms": 0, 01:08:17.140 "dhchap_digests": [ 01:08:17.140 "sha256", 01:08:17.140 "sha384", 01:08:17.140 "sha512" 01:08:17.140 ], 01:08:17.140 
"dhchap_dhgroups": [ 01:08:17.140 "null", 01:08:17.140 "ffdhe2048", 01:08:17.140 "ffdhe3072", 01:08:17.140 "ffdhe4096", 01:08:17.140 "ffdhe6144", 01:08:17.140 "ffdhe8192" 01:08:17.140 ] 01:08:17.140 } 01:08:17.140 }, 01:08:17.140 { 01:08:17.140 "method": "bdev_nvme_set_hotplug", 01:08:17.140 "params": { 01:08:17.140 "period_us": 100000, 01:08:17.140 "enable": false 01:08:17.140 } 01:08:17.140 }, 01:08:17.140 { 01:08:17.140 "method": "bdev_malloc_create", 01:08:17.140 "params": { 01:08:17.140 "name": "malloc0", 01:08:17.140 "num_blocks": 8192, 01:08:17.140 "block_size": 4096, 01:08:17.140 "physical_block_size": 4096, 01:08:17.140 "uuid": "f9059cbb-e85d-43f4-8b1b-afe9826606b7", 01:08:17.140 "optimal_io_boundary": 0, 01:08:17.140 "md_size": 0, 01:08:17.140 "dif_type": 0, 01:08:17.140 "dif_is_head_of_md": false, 01:08:17.140 "dif_pi_format": 0 01:08:17.140 } 01:08:17.140 }, 01:08:17.140 { 01:08:17.140 "method": "bdev_wait_for_examine" 01:08:17.140 } 01:08:17.140 ] 01:08:17.140 }, 01:08:17.140 { 01:08:17.140 "subsystem": "nbd", 01:08:17.140 "config": [] 01:08:17.140 }, 01:08:17.140 { 01:08:17.140 "subsystem": "scheduler", 01:08:17.140 "config": [ 01:08:17.140 { 01:08:17.140 "method": "framework_set_scheduler", 01:08:17.140 "params": { 01:08:17.140 "name": "static" 01:08:17.140 } 01:08:17.140 } 01:08:17.140 ] 01:08:17.140 }, 01:08:17.140 { 01:08:17.140 "subsystem": "nvmf", 01:08:17.140 "config": [ 01:08:17.140 { 01:08:17.140 "method": "nvmf_set_config", 01:08:17.140 "params": { 01:08:17.140 "discovery_filter": "match_any", 01:08:17.140 "admin_cmd_passthru": { 01:08:17.140 "identify_ctrlr": false 01:08:17.140 }, 01:08:17.140 "dhchap_digests": [ 01:08:17.140 "sha256", 01:08:17.140 "sha384", 01:08:17.140 "sha512" 01:08:17.140 ], 01:08:17.140 "dhchap_dhgroups": [ 01:08:17.140 "null", 01:08:17.140 "ffdhe2048", 01:08:17.140 "ffdhe3072", 01:08:17.140 "ffdhe4096", 01:08:17.140 "ffdhe6144", 01:08:17.140 "ffdhe8192" 01:08:17.140 ] 01:08:17.140 } 01:08:17.140 }, 01:08:17.140 { 01:08:17.140 "method": "nvmf_set_max_subsystems", 01:08:17.140 "params": { 01:08:17.140 "max_subsystems": 1024 01:08:17.140 } 01:08:17.140 }, 01:08:17.140 { 01:08:17.140 "method": "nvmf_set_crdt", 01:08:17.140 "params": { 01:08:17.140 "crdt1": 0, 01:08:17.140 "crdt2": 0, 01:08:17.140 "crdt3": 0 01:08:17.140 } 01:08:17.140 }, 01:08:17.140 { 01:08:17.140 "method": "nvmf_create_transport", 01:08:17.140 "params": { 01:08:17.140 "trtype": "TCP", 01:08:17.140 "max_queue_depth": 128, 01:08:17.140 "max_io_qpairs_per_ctrlr": 127, 01:08:17.140 "in_capsule_data_size": 4096, 01:08:17.140 "max_io_size": 131072, 01:08:17.140 "io_unit_size": 131072, 01:08:17.140 "max_aq_depth": 128, 01:08:17.140 "num_shared_buffers": 511, 01:08:17.140 "buf_cache_size": 4294967295, 01:08:17.140 "dif_insert_or_strip": false, 01:08:17.140 "zcopy": false, 01:08:17.140 "c2h_success": false, 01:08:17.140 "sock_priority": 0, 01:08:17.140 "abort_timeout_sec": 1, 01:08:17.140 "ack_timeout": 0, 01:08:17.140 "data_wr_pool_size": 0 01:08:17.140 } 01:08:17.140 }, 01:08:17.140 { 01:08:17.140 "method": "nvmf_create_subsystem", 01:08:17.140 "params": { 01:08:17.140 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:08:17.140 "allow_any_host": false, 01:08:17.140 "serial_number": "00000000000000000000", 01:08:17.140 "model_number": "SPDK bdev Controller", 01:08:17.140 "max_namespaces": 32, 01:08:17.140 "min_cntlid": 1, 01:08:17.140 "max_cntlid": 65519, 01:08:17.140 "ana_reporting": false 01:08:17.140 } 01:08:17.140 }, 01:08:17.140 { 01:08:17.140 "method": "nvmf_subsystem_add_host", 
01:08:17.140 "params": { 01:08:17.140 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:08:17.140 "host": "nqn.2016-06.io.spdk:host1", 01:08:17.141 "psk": "key0" 01:08:17.141 } 01:08:17.141 }, 01:08:17.141 { 01:08:17.141 "method": "nvmf_subsystem_add_ns", 01:08:17.141 "params": { 01:08:17.141 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:08:17.141 "namespace": { 01:08:17.141 "nsid": 1, 01:08:17.141 "bdev_name": "malloc0", 01:08:17.141 "nguid": "F9059CBBE85D43F48B1BAFE9826606B7", 01:08:17.141 "uuid": "f9059cbb-e85d-43f4-8b1b-afe9826606b7", 01:08:17.141 "no_auto_visible": false 01:08:17.141 } 01:08:17.141 } 01:08:17.141 }, 01:08:17.141 { 01:08:17.141 "method": "nvmf_subsystem_add_listener", 01:08:17.141 "params": { 01:08:17.141 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:08:17.141 "listen_address": { 01:08:17.141 "trtype": "TCP", 01:08:17.141 "adrfam": "IPv4", 01:08:17.141 "traddr": "10.0.0.3", 01:08:17.141 "trsvcid": "4420" 01:08:17.141 }, 01:08:17.141 "secure_channel": false, 01:08:17.141 "sock_impl": "ssl" 01:08:17.141 } 01:08:17.141 } 01:08:17.141 ] 01:08:17.141 } 01:08:17.141 ] 01:08:17.141 }' 01:08:17.141 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 01:08:17.401 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 01:08:17.401 "subsystems": [ 01:08:17.401 { 01:08:17.401 "subsystem": "keyring", 01:08:17.401 "config": [ 01:08:17.401 { 01:08:17.401 "method": "keyring_file_add_key", 01:08:17.401 "params": { 01:08:17.401 "name": "key0", 01:08:17.401 "path": "/tmp/tmp.zgX4aXhYc3" 01:08:17.401 } 01:08:17.401 } 01:08:17.401 ] 01:08:17.401 }, 01:08:17.401 { 01:08:17.401 "subsystem": "iobuf", 01:08:17.401 "config": [ 01:08:17.401 { 01:08:17.401 "method": "iobuf_set_options", 01:08:17.401 "params": { 01:08:17.401 "small_pool_count": 8192, 01:08:17.401 "large_pool_count": 1024, 01:08:17.401 "small_bufsize": 8192, 01:08:17.401 "large_bufsize": 135168, 01:08:17.401 "enable_numa": false 01:08:17.401 } 01:08:17.401 } 01:08:17.401 ] 01:08:17.401 }, 01:08:17.401 { 01:08:17.401 "subsystem": "sock", 01:08:17.401 "config": [ 01:08:17.401 { 01:08:17.401 "method": "sock_set_default_impl", 01:08:17.401 "params": { 01:08:17.401 "impl_name": "uring" 01:08:17.401 } 01:08:17.401 }, 01:08:17.401 { 01:08:17.401 "method": "sock_impl_set_options", 01:08:17.401 "params": { 01:08:17.401 "impl_name": "ssl", 01:08:17.401 "recv_buf_size": 4096, 01:08:17.401 "send_buf_size": 4096, 01:08:17.401 "enable_recv_pipe": true, 01:08:17.401 "enable_quickack": false, 01:08:17.401 "enable_placement_id": 0, 01:08:17.401 "enable_zerocopy_send_server": true, 01:08:17.401 "enable_zerocopy_send_client": false, 01:08:17.401 "zerocopy_threshold": 0, 01:08:17.401 "tls_version": 0, 01:08:17.401 "enable_ktls": false 01:08:17.401 } 01:08:17.401 }, 01:08:17.401 { 01:08:17.401 "method": "sock_impl_set_options", 01:08:17.401 "params": { 01:08:17.401 "impl_name": "posix", 01:08:17.401 "recv_buf_size": 2097152, 01:08:17.401 "send_buf_size": 2097152, 01:08:17.401 "enable_recv_pipe": true, 01:08:17.401 "enable_quickack": false, 01:08:17.401 "enable_placement_id": 0, 01:08:17.401 "enable_zerocopy_send_server": true, 01:08:17.401 "enable_zerocopy_send_client": false, 01:08:17.401 "zerocopy_threshold": 0, 01:08:17.401 "tls_version": 0, 01:08:17.401 "enable_ktls": false 01:08:17.401 } 01:08:17.401 }, 01:08:17.401 { 01:08:17.401 "method": "sock_impl_set_options", 01:08:17.401 "params": { 01:08:17.401 "impl_name": "uring", 01:08:17.401 
"recv_buf_size": 2097152, 01:08:17.401 "send_buf_size": 2097152, 01:08:17.401 "enable_recv_pipe": true, 01:08:17.401 "enable_quickack": false, 01:08:17.401 "enable_placement_id": 0, 01:08:17.401 "enable_zerocopy_send_server": false, 01:08:17.401 "enable_zerocopy_send_client": false, 01:08:17.401 "zerocopy_threshold": 0, 01:08:17.401 "tls_version": 0, 01:08:17.401 "enable_ktls": false 01:08:17.401 } 01:08:17.401 } 01:08:17.401 ] 01:08:17.401 }, 01:08:17.401 { 01:08:17.401 "subsystem": "vmd", 01:08:17.401 "config": [] 01:08:17.401 }, 01:08:17.401 { 01:08:17.401 "subsystem": "accel", 01:08:17.401 "config": [ 01:08:17.401 { 01:08:17.401 "method": "accel_set_options", 01:08:17.401 "params": { 01:08:17.401 "small_cache_size": 128, 01:08:17.401 "large_cache_size": 16, 01:08:17.401 "task_count": 2048, 01:08:17.401 "sequence_count": 2048, 01:08:17.401 "buf_count": 2048 01:08:17.401 } 01:08:17.401 } 01:08:17.401 ] 01:08:17.401 }, 01:08:17.401 { 01:08:17.401 "subsystem": "bdev", 01:08:17.401 "config": [ 01:08:17.401 { 01:08:17.401 "method": "bdev_set_options", 01:08:17.401 "params": { 01:08:17.401 "bdev_io_pool_size": 65535, 01:08:17.401 "bdev_io_cache_size": 256, 01:08:17.401 "bdev_auto_examine": true, 01:08:17.401 "iobuf_small_cache_size": 128, 01:08:17.401 "iobuf_large_cache_size": 16 01:08:17.401 } 01:08:17.401 }, 01:08:17.401 { 01:08:17.401 "method": "bdev_raid_set_options", 01:08:17.401 "params": { 01:08:17.401 "process_window_size_kb": 1024, 01:08:17.402 "process_max_bandwidth_mb_sec": 0 01:08:17.402 } 01:08:17.402 }, 01:08:17.402 { 01:08:17.402 "method": "bdev_iscsi_set_options", 01:08:17.402 "params": { 01:08:17.402 "timeout_sec": 30 01:08:17.402 } 01:08:17.402 }, 01:08:17.402 { 01:08:17.402 "method": "bdev_nvme_set_options", 01:08:17.402 "params": { 01:08:17.402 "action_on_timeout": "none", 01:08:17.402 "timeout_us": 0, 01:08:17.402 "timeout_admin_us": 0, 01:08:17.402 "keep_alive_timeout_ms": 10000, 01:08:17.402 "arbitration_burst": 0, 01:08:17.402 "low_priority_weight": 0, 01:08:17.402 "medium_priority_weight": 0, 01:08:17.402 "high_priority_weight": 0, 01:08:17.402 "nvme_adminq_poll_period_us": 10000, 01:08:17.402 "nvme_ioq_poll_period_us": 0, 01:08:17.402 "io_queue_requests": 512, 01:08:17.402 "delay_cmd_submit": true, 01:08:17.402 "transport_retry_count": 4, 01:08:17.402 "bdev_retry_count": 3, 01:08:17.402 "transport_ack_timeout": 0, 01:08:17.402 "ctrlr_loss_timeout_sec": 0, 01:08:17.402 "reconnect_delay_sec": 0, 01:08:17.402 "fast_io_fail_timeout_sec": 0, 01:08:17.402 "disable_auto_failback": false, 01:08:17.402 "generate_uuids": false, 01:08:17.402 "transport_tos": 0, 01:08:17.402 "nvme_error_stat": false, 01:08:17.402 "rdma_srq_size": 0, 01:08:17.402 "io_path_stat": false, 01:08:17.402 "allow_accel_sequence": false, 01:08:17.402 "rdma_max_cq_size": 0, 01:08:17.402 "rdma_cm_event_timeout_ms": 0, 01:08:17.402 "dhchap_digests": [ 01:08:17.402 "sha256", 01:08:17.402 "sha384", 01:08:17.402 "sha512" 01:08:17.402 ], 01:08:17.402 "dhchap_dhgroups": [ 01:08:17.402 "null", 01:08:17.402 "ffdhe2048", 01:08:17.402 "ffdhe3072", 01:08:17.402 "ffdhe4096", 01:08:17.402 "ffdhe6144", 01:08:17.402 "ffdhe8192" 01:08:17.402 ] 01:08:17.402 } 01:08:17.402 }, 01:08:17.402 { 01:08:17.402 "method": "bdev_nvme_attach_controller", 01:08:17.402 "params": { 01:08:17.402 "name": "nvme0", 01:08:17.402 "trtype": "TCP", 01:08:17.402 "adrfam": "IPv4", 01:08:17.402 "traddr": "10.0.0.3", 01:08:17.402 "trsvcid": "4420", 01:08:17.402 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:08:17.402 "prchk_reftag": false, 01:08:17.402 
"prchk_guard": false, 01:08:17.402 "ctrlr_loss_timeout_sec": 0, 01:08:17.402 "reconnect_delay_sec": 0, 01:08:17.402 "fast_io_fail_timeout_sec": 0, 01:08:17.402 "psk": "key0", 01:08:17.402 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:08:17.402 "hdgst": false, 01:08:17.402 "ddgst": false, 01:08:17.402 "multipath": "multipath" 01:08:17.402 } 01:08:17.402 }, 01:08:17.402 { 01:08:17.402 "method": "bdev_nvme_set_hotplug", 01:08:17.402 "params": { 01:08:17.402 "period_us": 100000, 01:08:17.402 "enable": false 01:08:17.402 } 01:08:17.402 }, 01:08:17.402 { 01:08:17.402 "method": "bdev_enable_histogram", 01:08:17.402 "params": { 01:08:17.402 "name": "nvme0n1", 01:08:17.402 "enable": true 01:08:17.402 } 01:08:17.402 }, 01:08:17.402 { 01:08:17.402 "method": "bdev_wait_for_examine" 01:08:17.402 } 01:08:17.402 ] 01:08:17.402 }, 01:08:17.402 { 01:08:17.402 "subsystem": "nbd", 01:08:17.402 "config": [] 01:08:17.402 } 01:08:17.402 ] 01:08:17.402 }' 01:08:17.402 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 71869 01:08:17.402 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71869 ']' 01:08:17.402 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71869 01:08:17.402 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:08:17.402 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:08:17.402 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71869 01:08:17.402 killing process with pid 71869 01:08:17.402 Received shutdown signal, test time was about 1.000000 seconds 01:08:17.402 01:08:17.402 Latency(us) 01:08:17.402 [2024-12-09T06:07:11.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:08:17.402 [2024-12-09T06:07:11.989Z] =================================================================================================================== 01:08:17.402 [2024-12-09T06:07:11.989Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:08:17.402 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:08:17.402 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:08:17.402 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71869' 01:08:17.402 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71869 01:08:17.402 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71869 01:08:17.662 06:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 71837 01:08:17.662 06:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71837 ']' 01:08:17.662 06:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71837 01:08:17.662 06:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:08:17.662 06:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:08:17.662 06:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71837 01:08:17.662 killing process with pid 71837 01:08:17.662 06:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 
01:08:17.662 06:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:08:17.662 06:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71837' 01:08:17.662 06:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71837 01:08:17.662 06:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71837 01:08:17.922 06:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 01:08:17.922 "subsystems": [ 01:08:17.922 { 01:08:17.922 "subsystem": "keyring", 01:08:17.922 "config": [ 01:08:17.922 { 01:08:17.922 "method": "keyring_file_add_key", 01:08:17.922 "params": { 01:08:17.922 "name": "key0", 01:08:17.922 "path": "/tmp/tmp.zgX4aXhYc3" 01:08:17.922 } 01:08:17.922 } 01:08:17.922 ] 01:08:17.922 }, 01:08:17.922 { 01:08:17.922 "subsystem": "iobuf", 01:08:17.922 "config": [ 01:08:17.922 { 01:08:17.922 "method": "iobuf_set_options", 01:08:17.922 "params": { 01:08:17.922 "small_pool_count": 8192, 01:08:17.922 "large_pool_count": 1024, 01:08:17.922 "small_bufsize": 8192, 01:08:17.922 "large_bufsize": 135168, 01:08:17.922 "enable_numa": false 01:08:17.922 } 01:08:17.922 } 01:08:17.922 ] 01:08:17.922 }, 01:08:17.922 { 01:08:17.922 "subsystem": "sock", 01:08:17.922 "config": [ 01:08:17.922 { 01:08:17.922 "method": "sock_set_default_impl", 01:08:17.922 "params": { 01:08:17.922 "impl_name": "uring" 01:08:17.922 } 01:08:17.922 }, 01:08:17.922 { 01:08:17.922 "method": "sock_impl_set_options", 01:08:17.922 "params": { 01:08:17.922 "impl_name": "ssl", 01:08:17.922 "recv_buf_size": 4096, 01:08:17.922 "send_buf_size": 4096, 01:08:17.922 "enable_recv_pipe": true, 01:08:17.922 "enable_quickack": false, 01:08:17.922 "enable_placement_id": 0, 01:08:17.922 "enable_zerocopy_send_server": true, 01:08:17.922 "enable_zerocopy_send_client": false, 01:08:17.922 "zerocopy_threshold": 0, 01:08:17.922 "tls_version": 0, 01:08:17.922 "enable_ktls": false 01:08:17.922 } 01:08:17.922 }, 01:08:17.922 { 01:08:17.923 "method": "sock_impl_set_options", 01:08:17.923 "params": { 01:08:17.923 "impl_name": "posix", 01:08:17.923 "recv_buf_size": 2097152, 01:08:17.923 "send_buf_size": 2097152, 01:08:17.923 "enable_recv_pipe": true, 01:08:17.923 "enable_quickack": false, 01:08:17.923 "enable_placement_id": 0, 01:08:17.923 "enable_zerocopy_send_server": true, 01:08:17.923 "enable_zerocopy_send_client": false, 01:08:17.923 "zerocopy_threshold": 0, 01:08:17.923 "tls_version": 0, 01:08:17.923 "enable_ktls": false 01:08:17.923 } 01:08:17.923 }, 01:08:17.923 { 01:08:17.923 "method": "sock_impl_set_options", 01:08:17.923 "params": { 01:08:17.923 "impl_name": "uring", 01:08:17.923 "recv_buf_size": 2097152, 01:08:17.923 "send_buf_size": 2097152, 01:08:17.923 "enable_recv_pipe": true, 01:08:17.923 "enable_quickack": false, 01:08:17.923 "enable_placement_id": 0, 01:08:17.923 "enable_zerocopy_send_server": false, 01:08:17.923 "enable_zerocopy_send_client": false, 01:08:17.923 "zerocopy_threshold": 0, 01:08:17.923 "tls_version": 0, 01:08:17.923 "enable_ktls": false 01:08:17.923 } 01:08:17.923 } 01:08:17.923 ] 01:08:17.923 }, 01:08:17.923 { 01:08:17.923 "subsystem": "vmd", 01:08:17.923 "config": [] 01:08:17.923 }, 01:08:17.923 { 01:08:17.923 "subsystem": "accel", 01:08:17.923 "config": [ 01:08:17.923 { 01:08:17.923 "method": "accel_set_options", 01:08:17.923 "params": { 01:08:17.923 "small_cache_size": 128, 01:08:17.923 "large_cache_size": 16, 01:08:17.923 "task_count": 
2048, 01:08:17.923 "sequence_count": 2048, 01:08:17.923 "buf_count": 2048 01:08:17.923 } 01:08:17.923 } 01:08:17.923 ] 01:08:17.923 }, 01:08:17.923 { 01:08:17.923 "subsystem": "bdev", 01:08:17.923 "config": [ 01:08:17.923 { 01:08:17.923 "method": "bdev_set_options", 01:08:17.923 "params": { 01:08:17.923 "bdev_io_pool_size": 65535, 01:08:17.923 "bdev_io_cache_size": 256, 01:08:17.923 "bdev_auto_examine": true, 01:08:17.923 "iobuf_small_cache_size": 128, 01:08:17.923 "iobuf_large_cache_size": 16 01:08:17.923 } 01:08:17.923 }, 01:08:17.923 { 01:08:17.923 "method": "bdev_raid_set_options", 01:08:17.923 "params": { 01:08:17.923 "process_window_size_kb": 1024, 01:08:17.923 "process_max_bandwidth_mb_sec": 0 01:08:17.923 } 01:08:17.923 }, 01:08:17.923 { 01:08:17.923 "method": "bdev_iscsi_set_options", 01:08:17.923 "params": { 01:08:17.923 "timeout_sec": 30 01:08:17.923 } 01:08:17.923 }, 01:08:17.923 { 01:08:17.923 "method": "bdev_nvme_set_options", 01:08:17.923 "params": { 01:08:17.923 "action_on_timeout": "none", 01:08:17.923 "timeout_us": 0, 01:08:17.923 "timeout_admin_us": 0, 01:08:17.923 "keep_alive_timeout_ms": 10000, 01:08:17.923 "arbitration_burst": 0, 01:08:17.923 "low_priority_weight": 0, 01:08:17.923 "medium_priority_weight": 0, 01:08:17.923 "high_priority_weight": 0, 01:08:17.923 "nvme_adminq_poll_period_us": 10000, 01:08:17.923 "nvme_ioq_poll_period_us": 0, 01:08:17.923 "io_queue_requests": 0, 01:08:17.923 "delay_cmd_submit": true, 01:08:17.923 "transport_retry_count": 4, 01:08:17.923 "bdev_retry_count": 3, 01:08:17.923 "transport_ack_timeout": 0, 01:08:17.923 "ctrlr_loss_timeout_sec": 0, 01:08:17.923 "reconnect_delay_sec": 0, 01:08:17.923 "fast_io_fail_timeout_sec": 0, 01:08:17.923 "disable_auto_failback": false, 01:08:17.923 "generate_uuids": false, 01:08:17.923 "transport_tos": 0, 01:08:17.923 "nvme_error_stat": false, 01:08:17.923 "rdma_srq_size": 0, 01:08:17.923 "io_path_stat": false, 01:08:17.923 "allow_accel_sequence": false, 01:08:17.923 "rdma_max_cq_size": 0, 01:08:17.923 "rdma_cm_event_timeout_ms": 0, 01:08:17.923 "dhchap_digests": [ 01:08:17.923 "sha256", 01:08:17.923 "sha384", 01:08:17.923 "sha512" 01:08:17.923 ], 01:08:17.923 "dhchap_dhgroups": [ 01:08:17.923 "null", 01:08:17.923 "ffdhe2048", 01:08:17.923 "ffdhe3072", 01:08:17.923 "ffdhe4096", 01:08:17.923 "ffdhe6144", 01:08:17.923 "ffdhe8192" 01:08:17.923 ] 01:08:17.923 } 01:08:17.923 }, 01:08:17.923 { 01:08:17.923 "method": "bdev_nvme_set_hotplug", 01:08:17.923 "params": { 01:08:17.923 "period_us": 100000, 01:08:17.923 "enable": false 01:08:17.923 } 01:08:17.923 }, 01:08:17.923 { 01:08:17.923 "method": "bdev_malloc_create", 01:08:17.923 "params": { 01:08:17.923 "name": "malloc0", 01:08:17.923 "num_blocks": 8192, 01:08:17.923 "block_size": 4096, 01:08:17.923 "physical_block_size": 4096, 01:08:17.923 "uuid": "f9059cbb-e85d-43f4-8b1b-afe9826606b7", 01:08:17.923 "optimal_io_boundary": 0, 01:08:17.923 "md_size": 0, 01:08:17.923 "dif_type": 0, 01:08:17.923 "dif_is_head_of_md": false, 01:08:17.923 "dif_pi_format": 0 01:08:17.923 } 01:08:17.923 }, 01:08:17.923 { 01:08:17.923 "method": "bdev_wait_for_examine" 01:08:17.923 } 01:08:17.923 ] 01:08:17.923 }, 01:08:17.923 { 01:08:17.923 "subsystem": "nbd", 01:08:17.923 "config": [] 01:08:17.923 }, 01:08:17.923 { 01:08:17.923 "subsystem": "scheduler", 01:08:17.923 "config": [ 01:08:17.923 { 01:08:17.923 "method": "framework_set_scheduler", 01:08:17.923 "params": { 01:08:17.923 "name": "static" 01:08:17.923 } 01:08:17.923 } 01:08:17.923 ] 01:08:17.923 }, 01:08:17.923 { 01:08:17.923 
"subsystem": "nvmf", 01:08:17.923 "config": [ 01:08:17.923 { 01:08:17.923 "method": "nvmf_set_config", 01:08:17.923 "params": { 01:08:17.923 "discovery_filter": "match_any", 01:08:17.923 "admin_cmd_passthru": { 01:08:17.923 "identify_ctrlr": false 01:08:17.923 }, 01:08:17.923 "dhchap_digests": [ 01:08:17.923 "sha256", 01:08:17.923 "sha384", 01:08:17.923 "sha512" 01:08:17.923 ], 01:08:17.923 "dhchap_dhgroups": [ 01:08:17.923 "null", 01:08:17.923 "ffdhe2048", 01:08:17.923 "ffdhe3072", 01:08:17.923 "ffdhe4096", 01:08:17.923 "ffdhe6144", 01:08:17.923 "ffdhe8192" 01:08:17.923 ] 01:08:17.923 } 01:08:17.923 }, 01:08:17.923 { 01:08:17.923 "method": "nvmf_set_max_subsystems", 01:08:17.923 "params": { 01:08:17.923 "max_subsystems": 1024 01:08:17.923 } 01:08:17.923 }, 01:08:17.923 { 01:08:17.923 "method": "nvmf_set_crdt", 01:08:17.923 "params": { 01:08:17.923 "crdt1": 0, 01:08:17.923 "crdt2": 0, 01:08:17.923 "crdt3": 0 01:08:17.923 } 01:08:17.923 }, 01:08:17.923 { 01:08:17.923 "method": "nvmf_create_transport", 01:08:17.923 "params": { 01:08:17.923 "trtype": "TCP", 01:08:17.923 "max_queue_depth": 128, 01:08:17.923 "max_io_qpairs_per_ctrlr": 127, 01:08:17.923 "in_capsule_data_size": 4096, 01:08:17.923 "max_io_size": 131072, 01:08:17.923 "io_unit_size": 131072, 01:08:17.923 "max_aq_depth": 128, 01:08:17.923 "num_shared_buffers": 511, 01:08:17.923 "buf_cache_size": 4294967295, 01:08:17.923 "dif_insert_or_strip": false, 01:08:17.923 "zcopy": false, 01:08:17.923 "c2h_success": false, 01:08:17.923 "sock_priority": 0, 01:08:17.923 "abort_timeout_sec": 1, 01:08:17.923 "ack_timeout": 0, 01:08:17.923 "data_wr_pool_size": 0 01:08:17.923 } 01:08:17.923 }, 01:08:17.923 { 01:08:17.923 "method": "nvmf_create_subsystem", 01:08:17.923 "params": { 01:08:17.923 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:08:17.923 "allow_any_host": false, 01:08:17.923 "serial_number": "00000000000000000000", 01:08:17.923 "model_number": "SPDK bdev Controller", 01:08:17.923 "max_namespaces": 32, 01:08:17.923 "min_cntlid": 1, 01:08:17.923 "max_cntlid": 65519, 01:08:17.923 "ana_reporting": false 01:08:17.923 } 01:08:17.923 }, 01:08:17.923 { 01:08:17.923 "method": "nvmf_subsystem_add_host", 01:08:17.923 "params": { 01:08:17.923 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:08:17.923 "host": "nqn.2016-06.io.spdk:host1", 01:08:17.923 "psk": "key0" 01:08:17.923 } 01:08:17.923 }, 01:08:17.923 { 01:08:17.923 "method": "nvmf_subsystem_add_ns", 01:08:17.923 "params": { 01:08:17.923 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:08:17.923 "namespace": { 01:08:17.923 "nsid": 1, 01:08:17.923 "bdev_name": "malloc0", 01:08:17.923 "nguid": "F9059CBBE85D43F48B1BAFE9826606B7", 01:08:17.923 "uuid": "f9059cbb-e85d-43f4-8b1b-afe9826606b7", 01:08:17.923 "no_auto_visible": false 01:08:17.923 } 01:08:17.923 } 01:08:17.923 }, 01:08:17.923 { 01:08:17.923 "method": "nvmf_subsystem_add_listener", 01:08:17.923 "params": { 01:08:17.923 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:08:17.923 "listen_address": { 01:08:17.923 "trtype": "TCP", 01:08:17.923 "adrfam": "IPv4", 01:08:17.923 "traddr": "10.0.0.3", 01:08:17.923 "trsvcid": "4420" 01:08:17.923 }, 01:08:17.923 "secure_channel": false, 01:08:17.923 "sock_impl": "ssl" 01:08:17.923 } 01:08:17.923 } 01:08:17.923 ] 01:08:17.923 } 01:08:17.923 ] 01:08:17.923 }' 01:08:17.923 06:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 01:08:17.923 06:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:08:17.924 06:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@726 -- # xtrace_disable 01:08:17.924 06:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:08:17.924 06:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71934 01:08:17.924 06:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 01:08:17.924 06:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71934 01:08:17.924 06:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71934 ']' 01:08:17.924 06:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:08:17.924 06:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:08:17.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:08:17.924 06:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:08:17.924 06:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:08:17.924 06:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:08:17.924 [2024-12-09 06:07:12.492424] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:08:17.924 [2024-12-09 06:07:12.492528] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:08:18.184 [2024-12-09 06:07:12.651146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:08:18.184 [2024-12-09 06:07:12.688484] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:08:18.184 [2024-12-09 06:07:12.688527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:08:18.184 [2024-12-09 06:07:12.688536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:08:18.184 [2024-12-09 06:07:12.688544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:08:18.184 [2024-12-09 06:07:12.688550] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
01:08:18.184 [2024-12-09 06:07:12.688832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:08:18.444 [2024-12-09 06:07:12.843449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:08:18.444 [2024-12-09 06:07:12.913468] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:08:18.444 [2024-12-09 06:07:12.945374] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:08:18.444 [2024-12-09 06:07:12.945550] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:08:19.022 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:08:19.022 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:08:19.022 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:08:19.022 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:08:19.022 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:08:19.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:08:19.022 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:08:19.022 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=71962 01:08:19.022 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 71962 /var/tmp/bdevperf.sock 01:08:19.022 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71962 ']' 01:08:19.022 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:08:19.022 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:08:19.022 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
01:08:19.022 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:08:19.022 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:08:19.022 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 01:08:19.022 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 01:08:19.022 "subsystems": [ 01:08:19.022 { 01:08:19.022 "subsystem": "keyring", 01:08:19.022 "config": [ 01:08:19.022 { 01:08:19.022 "method": "keyring_file_add_key", 01:08:19.022 "params": { 01:08:19.022 "name": "key0", 01:08:19.022 "path": "/tmp/tmp.zgX4aXhYc3" 01:08:19.022 } 01:08:19.022 } 01:08:19.022 ] 01:08:19.022 }, 01:08:19.022 { 01:08:19.022 "subsystem": "iobuf", 01:08:19.022 "config": [ 01:08:19.022 { 01:08:19.022 "method": "iobuf_set_options", 01:08:19.022 "params": { 01:08:19.022 "small_pool_count": 8192, 01:08:19.022 "large_pool_count": 1024, 01:08:19.022 "small_bufsize": 8192, 01:08:19.022 "large_bufsize": 135168, 01:08:19.022 "enable_numa": false 01:08:19.022 } 01:08:19.022 } 01:08:19.022 ] 01:08:19.022 }, 01:08:19.022 { 01:08:19.022 "subsystem": "sock", 01:08:19.022 "config": [ 01:08:19.022 { 01:08:19.022 "method": "sock_set_default_impl", 01:08:19.022 "params": { 01:08:19.022 "impl_name": "uring" 01:08:19.022 } 01:08:19.022 }, 01:08:19.022 { 01:08:19.022 "method": "sock_impl_set_options", 01:08:19.022 "params": { 01:08:19.022 "impl_name": "ssl", 01:08:19.022 "recv_buf_size": 4096, 01:08:19.022 "send_buf_size": 4096, 01:08:19.022 "enable_recv_pipe": true, 01:08:19.022 "enable_quickack": false, 01:08:19.022 "enable_placement_id": 0, 01:08:19.022 "enable_zerocopy_send_server": true, 01:08:19.022 "enable_zerocopy_send_client": false, 01:08:19.022 "zerocopy_threshold": 0, 01:08:19.022 "tls_version": 0, 01:08:19.022 "enable_ktls": false 01:08:19.022 } 01:08:19.022 }, 01:08:19.022 { 01:08:19.022 "method": "sock_impl_set_options", 01:08:19.022 "params": { 01:08:19.022 "impl_name": "posix", 01:08:19.022 "recv_buf_size": 2097152, 01:08:19.022 "send_buf_size": 2097152, 01:08:19.022 "enable_recv_pipe": true, 01:08:19.022 "enable_quickack": false, 01:08:19.022 "enable_placement_id": 0, 01:08:19.022 "enable_zerocopy_send_server": true, 01:08:19.022 "enable_zerocopy_send_client": false, 01:08:19.022 "zerocopy_threshold": 0, 01:08:19.022 "tls_version": 0, 01:08:19.022 "enable_ktls": false 01:08:19.022 } 01:08:19.022 }, 01:08:19.022 { 01:08:19.022 "method": "sock_impl_set_options", 01:08:19.022 "params": { 01:08:19.022 "impl_name": "uring", 01:08:19.022 "recv_buf_size": 2097152, 01:08:19.022 "send_buf_size": 2097152, 01:08:19.022 "enable_recv_pipe": true, 01:08:19.022 "enable_quickack": false, 01:08:19.022 "enable_placement_id": 0, 01:08:19.022 "enable_zerocopy_send_server": false, 01:08:19.022 "enable_zerocopy_send_client": false, 01:08:19.022 "zerocopy_threshold": 0, 01:08:19.022 "tls_version": 0, 01:08:19.022 "enable_ktls": false 01:08:19.022 } 01:08:19.022 } 01:08:19.022 ] 01:08:19.022 }, 01:08:19.022 { 01:08:19.022 "subsystem": "vmd", 01:08:19.022 "config": [] 01:08:19.022 }, 01:08:19.022 { 01:08:19.022 "subsystem": "accel", 01:08:19.022 "config": [ 01:08:19.022 { 01:08:19.022 "method": "accel_set_options", 01:08:19.022 "params": { 01:08:19.022 "small_cache_size": 128, 01:08:19.022 "large_cache_size": 16, 01:08:19.022 "task_count": 2048, 01:08:19.022 "sequence_count": 2048, 
01:08:19.022 "buf_count": 2048 01:08:19.022 } 01:08:19.022 } 01:08:19.022 ] 01:08:19.022 }, 01:08:19.022 { 01:08:19.022 "subsystem": "bdev", 01:08:19.022 "config": [ 01:08:19.022 { 01:08:19.022 "method": "bdev_set_options", 01:08:19.022 "params": { 01:08:19.022 "bdev_io_pool_size": 65535, 01:08:19.022 "bdev_io_cache_size": 256, 01:08:19.022 "bdev_auto_examine": true, 01:08:19.022 "iobuf_small_cache_size": 128, 01:08:19.022 "iobuf_large_cache_size": 16 01:08:19.022 } 01:08:19.022 }, 01:08:19.022 { 01:08:19.022 "method": "bdev_raid_set_options", 01:08:19.022 "params": { 01:08:19.022 "process_window_size_kb": 1024, 01:08:19.022 "process_max_bandwidth_mb_sec": 0 01:08:19.022 } 01:08:19.022 }, 01:08:19.022 { 01:08:19.022 "method": "bdev_iscsi_set_options", 01:08:19.022 "params": { 01:08:19.022 "timeout_sec": 30 01:08:19.022 } 01:08:19.022 }, 01:08:19.022 { 01:08:19.022 "method": "bdev_nvme_set_options", 01:08:19.022 "params": { 01:08:19.022 "action_on_timeout": "none", 01:08:19.022 "timeout_us": 0, 01:08:19.022 "timeout_admin_us": 0, 01:08:19.022 "keep_alive_timeout_ms": 10000, 01:08:19.022 "arbitration_burst": 0, 01:08:19.022 "low_priority_weight": 0, 01:08:19.022 "medium_priority_weight": 0, 01:08:19.022 "high_priority_weight": 0, 01:08:19.022 "nvme_adminq_poll_period_us": 10000, 01:08:19.022 "nvme_ioq_poll_period_us": 0, 01:08:19.022 "io_queue_requests": 512, 01:08:19.022 "delay_cmd_submit": true, 01:08:19.022 "transport_retry_count": 4, 01:08:19.022 "bdev_retry_count": 3, 01:08:19.022 "transport_ack_timeout": 0, 01:08:19.022 "ctrlr_loss_timeout_sec": 0, 01:08:19.022 "reconnect_delay_sec": 0, 01:08:19.022 "fast_io_fail_timeout_sec": 0, 01:08:19.022 "disable_auto_failback": false, 01:08:19.022 "generate_uuids": false, 01:08:19.022 "transport_tos": 0, 01:08:19.022 "nvme_error_stat": false, 01:08:19.022 "rdma_srq_size": 0, 01:08:19.022 "io_path_stat": false, 01:08:19.022 "allow_accel_sequence": false, 01:08:19.022 "rdma_max_cq_size": 0, 01:08:19.022 "rdma_cm_event_timeout_ms": 0, 01:08:19.022 "dhchap_digests": [ 01:08:19.022 "sha256", 01:08:19.022 "sha384", 01:08:19.022 "sha512" 01:08:19.022 ], 01:08:19.022 "dhchap_dhgroups": [ 01:08:19.022 "null", 01:08:19.022 "ffdhe2048", 01:08:19.022 "ffdhe3072", 01:08:19.022 "ffdhe4096", 01:08:19.022 "ffdhe6144", 01:08:19.022 "ffdhe8192" 01:08:19.022 ] 01:08:19.022 } 01:08:19.022 }, 01:08:19.022 { 01:08:19.022 "method": "bdev_nvme_attach_controller", 01:08:19.022 "params": { 01:08:19.022 "name": "nvme0", 01:08:19.022 "trtype": "TCP", 01:08:19.022 "adrfam": "IPv4", 01:08:19.022 "traddr": "10.0.0.3", 01:08:19.022 "trsvcid": "4420", 01:08:19.022 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:08:19.022 "prchk_reftag": false, 01:08:19.023 "prchk_guard": false, 01:08:19.023 "ctrlr_loss_timeout_sec": 0, 01:08:19.023 "reconnect_delay_sec": 0, 01:08:19.023 "fast_io_fail_timeout_sec": 0, 01:08:19.023 "psk": "key0", 01:08:19.023 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:08:19.023 "hdgst": false, 01:08:19.023 "ddgst": false, 01:08:19.023 "multipath": "multipath" 01:08:19.023 } 01:08:19.023 }, 01:08:19.023 { 01:08:19.023 "method": "bdev_nvme_set_hotplug", 01:08:19.023 "params": { 01:08:19.023 "period_us": 100000, 01:08:19.023 "enable": false 01:08:19.023 } 01:08:19.023 }, 01:08:19.023 { 01:08:19.023 "method": "bdev_enable_histogram", 01:08:19.023 "params": { 01:08:19.023 "name": "nvme0n1", 01:08:19.023 "enable": true 01:08:19.023 } 01:08:19.023 }, 01:08:19.023 { 01:08:19.023 "method": "bdev_wait_for_examine" 01:08:19.023 } 01:08:19.023 ] 01:08:19.023 }, 01:08:19.023 { 
01:08:19.023 "subsystem": "nbd", 01:08:19.023 "config": [] 01:08:19.023 } 01:08:19.023 ] 01:08:19.023 }' 01:08:19.023 [2024-12-09 06:07:13.486847] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:08:19.023 [2024-12-09 06:07:13.487108] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71962 ] 01:08:19.283 [2024-12-09 06:07:13.642137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:08:19.283 [2024-12-09 06:07:13.698595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:08:19.283 [2024-12-09 06:07:13.849956] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:08:19.543 [2024-12-09 06:07:13.911671] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:08:19.802 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:08:19.802 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:08:19.802 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:08:19.802 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 01:08:20.062 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:08:20.062 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:08:20.062 Running I/O for 1 seconds... 
01:08:21.446 5648.00 IOPS, 22.06 MiB/s 01:08:21.446 Latency(us) 01:08:21.446 [2024-12-09T06:07:16.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:08:21.446 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:08:21.446 Verification LBA range: start 0x0 length 0x2000 01:08:21.446 nvme0n1 : 1.01 5703.91 22.28 0.00 0.00 22280.95 4658.58 17476.27 01:08:21.446 [2024-12-09T06:07:16.033Z] =================================================================================================================== 01:08:21.446 [2024-12-09T06:07:16.033Z] Total : 5703.91 22.28 0.00 0.00 22280.95 4658.58 17476.27 01:08:21.446 { 01:08:21.446 "results": [ 01:08:21.446 { 01:08:21.446 "job": "nvme0n1", 01:08:21.446 "core_mask": "0x2", 01:08:21.446 "workload": "verify", 01:08:21.446 "status": "finished", 01:08:21.446 "verify_range": { 01:08:21.446 "start": 0, 01:08:21.446 "length": 8192 01:08:21.446 }, 01:08:21.446 "queue_depth": 128, 01:08:21.446 "io_size": 4096, 01:08:21.446 "runtime": 1.012639, 01:08:21.446 "iops": 5703.908302958903, 01:08:21.446 "mibps": 22.280891808433214, 01:08:21.446 "io_failed": 0, 01:08:21.446 "io_timeout": 0, 01:08:21.446 "avg_latency_us": 22280.94771106587, 01:08:21.446 "min_latency_us": 4658.583132530121, 01:08:21.446 "max_latency_us": 17476.266666666666 01:08:21.446 } 01:08:21.446 ], 01:08:21.446 "core_count": 1 01:08:21.446 } 01:08:21.446 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 01:08:21.446 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 01:08:21.446 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 01:08:21.446 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 01:08:21.446 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 01:08:21.446 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 01:08:21.446 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 01:08:21.446 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 01:08:21.446 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 01:08:21.446 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 01:08:21.446 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 01:08:21.446 nvmf_trace.0 01:08:21.446 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 01:08:21.446 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 71962 01:08:21.446 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71962 ']' 01:08:21.446 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71962 01:08:21.446 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:08:21.446 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:08:21.446 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71962 01:08:21.446 06:07:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:08:21.446 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:08:21.446 killing process with pid 71962 01:08:21.446 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71962' 01:08:21.446 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71962 01:08:21.446 Received shutdown signal, test time was about 1.000000 seconds 01:08:21.446 01:08:21.446 Latency(us) 01:08:21.446 [2024-12-09T06:07:16.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:08:21.446 [2024-12-09T06:07:16.033Z] =================================================================================================================== 01:08:21.446 [2024-12-09T06:07:16.033Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:08:21.446 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71962 01:08:21.705 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 01:08:21.705 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 01:08:21.705 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 01:08:21.705 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:08:21.705 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 01:08:21.705 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 01:08:21.705 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:08:21.705 rmmod nvme_tcp 01:08:21.705 rmmod nvme_fabrics 01:08:21.705 rmmod nvme_keyring 01:08:21.705 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:08:21.705 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 01:08:21.705 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 01:08:21.705 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 71934 ']' 01:08:21.705 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 71934 01:08:21.705 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71934 ']' 01:08:21.705 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71934 01:08:21.705 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:08:21.705 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:08:21.705 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71934 01:08:21.705 killing process with pid 71934 01:08:21.705 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:08:21.705 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:08:21.705 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71934' 01:08:21.705 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71934 01:08:21.705 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # 
wait 71934 01:08:21.964 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:08:21.964 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:08:21.964 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:08:21.964 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 01:08:21.964 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 01:08:21.964 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 01:08:21.964 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:08:21.964 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:08:21.964 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:08:21.964 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:08:21.964 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:08:21.964 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:08:21.964 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:08:21.964 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:08:21.964 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:08:21.965 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:08:21.965 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:08:21.965 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:08:22.224 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:08:22.224 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:08:22.224 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:08:22.224 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:08:22.224 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 01:08:22.224 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:22.224 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:08:22.224 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:22.224 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 01:08:22.224 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.3IbFm29waC /tmp/tmp.Y1JHo4z6jh /tmp/tmp.zgX4aXhYc3 01:08:22.224 01:08:22.224 real 1m24.747s 01:08:22.224 user 2m3.530s 01:08:22.224 sys 0m34.106s 01:08:22.224 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 01:08:22.224 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 
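The verify summary above is internally consistent: with the 4096-byte I/O size reported in the JSON result, the IOPS, throughput and runtime figures agree. A back-of-the-envelope check, using only values copied from that result ("iops", "io_size", "runtime"):

  awk 'BEGIN {
      iops = 5703.908302958903; io_size = 4096; runtime = 1.012639
      printf "throughput: %.2f MiB/s\n", iops * io_size / (1024 * 1024)   # ~22.28
      printf "completed : %.0f I/Os\n",  iops * runtime                   # ~5776
  }'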
01:08:22.224 ************************************ 01:08:22.224 END TEST nvmf_tls 01:08:22.224 ************************************ 01:08:22.224 06:07:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 01:08:22.224 06:07:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:08:22.224 06:07:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:08:22.224 06:07:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:08:22.484 ************************************ 01:08:22.484 START TEST nvmf_fips 01:08:22.484 ************************************ 01:08:22.484 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 01:08:22.484 * Looking for test storage... 01:08:22.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 01:08:22.484 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:08:22.484 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 01:08:22.484 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:08:22.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:22.484 --rc genhtml_branch_coverage=1 01:08:22.484 --rc genhtml_function_coverage=1 01:08:22.484 --rc genhtml_legend=1 01:08:22.484 --rc geninfo_all_blocks=1 01:08:22.484 --rc geninfo_unexecuted_blocks=1 01:08:22.484 01:08:22.484 ' 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:08:22.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:22.484 --rc genhtml_branch_coverage=1 01:08:22.484 --rc genhtml_function_coverage=1 01:08:22.484 --rc genhtml_legend=1 01:08:22.484 --rc geninfo_all_blocks=1 01:08:22.484 --rc geninfo_unexecuted_blocks=1 01:08:22.484 01:08:22.484 ' 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:08:22.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:22.484 --rc genhtml_branch_coverage=1 01:08:22.484 --rc genhtml_function_coverage=1 01:08:22.484 --rc genhtml_legend=1 01:08:22.484 --rc geninfo_all_blocks=1 01:08:22.484 --rc geninfo_unexecuted_blocks=1 01:08:22.484 01:08:22.484 ' 01:08:22.484 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:08:22.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:22.484 --rc genhtml_branch_coverage=1 01:08:22.484 --rc genhtml_function_coverage=1 01:08:22.485 --rc genhtml_legend=1 01:08:22.485 --rc geninfo_all_blocks=1 01:08:22.485 --rc geninfo_unexecuted_blocks=1 01:08:22.485 01:08:22.485 ' 01:08:22.485 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:08:22.485 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 01:08:22.485 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
01:08:22.485 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:08:22.485 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:08:22.485 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:08:22.485 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:08:22.485 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:08:22.485 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:08:22.745 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:08:22.745 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:08:22.745 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:08:22.745 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:08:22.745 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:08:22.745 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:08:22.745 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:08:22.745 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:08:22.745 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:08:22.745 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:08:22.745 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 01:08:22.745 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:08:22.745 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:08:22.745 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:08:22.745 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:22.745 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:22.745 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:22.745 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 01:08:22.745 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:22.745 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 01:08:22.745 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:08:22.745 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:08:22.745 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:08:22.746 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 01:08:22.746 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 01:08:22.746 Error setting digest 01:08:22.747 407282FD167F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 01:08:22.747 407282FD167F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 01:08:22.747 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 01:08:22.747 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:08:22.747 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:08:22.747 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:08:22.747 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 01:08:22.747 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:08:22.747 
06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:08:22.747 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 01:08:22.747 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 01:08:22.747 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 01:08:22.747 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:22.747 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:08:22.747 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:22.747 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:08:23.006 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:08:23.006 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:08:23.006 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:08:23.006 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:08:23.006 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 01:08:23.006 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:08:23.006 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:08:23.006 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:08:23.006 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:08:23.006 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:08:23.006 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:08:23.006 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:08:23.006 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:08:23.006 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:08:23.006 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:08:23.006 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:08:23.006 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:08:23.006 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:08:23.007 Cannot find device "nvmf_init_br" 01:08:23.007 06:07:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:08:23.007 Cannot find device "nvmf_init_br2" 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:08:23.007 Cannot find device "nvmf_tgt_br" 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:08:23.007 Cannot find device "nvmf_tgt_br2" 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:08:23.007 Cannot find device "nvmf_init_br" 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:08:23.007 Cannot find device "nvmf_init_br2" 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:08:23.007 Cannot find device "nvmf_tgt_br" 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:08:23.007 Cannot find device "nvmf_tgt_br2" 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:08:23.007 Cannot find device "nvmf_br" 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:08:23.007 Cannot find device "nvmf_init_if" 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:08:23.007 Cannot find device "nvmf_init_if2" 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:08:23.007 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:08:23.007 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:08:23.007 06:07:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:08:23.007 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:08:23.266 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:08:23.266 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:08:23.266 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:08:23.266 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:08:23.266 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:08:23.266 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:08:23.266 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:08:23.266 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:08:23.266 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:08:23.266 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:08:23.266 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:08:23.266 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:08:23.266 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:08:23.267 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:08:23.267 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:08:23.267 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:08:23.267 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:08:23.267 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:08:23.267 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:08:23.267 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:08:23.267 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:08:23.267 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:08:23.267 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:08:23.267 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:08:23.267 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:08:23.267 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:08:23.267 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:08:23.267 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:08:23.527 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:08:23.527 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:08:23.527 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.122 ms 01:08:23.527 01:08:23.527 --- 10.0.0.3 ping statistics --- 01:08:23.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:23.527 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 01:08:23.527 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:08:23.527 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:08:23.527 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 01:08:23.527 01:08:23.527 --- 10.0.0.4 ping statistics --- 01:08:23.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:23.527 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 01:08:23.527 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:08:23.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:08:23.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 01:08:23.527 01:08:23.527 --- 10.0.0.1 ping statistics --- 01:08:23.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:23.527 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 01:08:23.527 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:08:23.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:08:23.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 01:08:23.527 01:08:23.527 --- 10.0.0.2 ping statistics --- 01:08:23.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:23.527 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 01:08:23.527 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:08:23.527 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 01:08:23.527 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:08:23.527 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:08:23.527 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:08:23.527 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:08:23.527 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:08:23.527 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:08:23.527 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:08:23.527 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 01:08:23.527 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:08:23.527 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 01:08:23.527 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:08:23.527 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72278 01:08:23.527 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:08:23.527 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72278 01:08:23.527 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72278 ']' 01:08:23.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:08:23.527 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:08:23.527 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 01:08:23.527 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:08:23.527 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 01:08:23.527 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:08:23.527 [2024-12-09 06:07:17.987507] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:08:23.527 [2024-12-09 06:07:17.987571] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:08:23.787 [2024-12-09 06:07:18.140188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:08:23.787 [2024-12-09 06:07:18.199453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:08:23.787 [2024-12-09 06:07:18.199502] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:08:23.787 [2024-12-09 06:07:18.199512] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:08:23.787 [2024-12-09 06:07:18.199519] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:08:23.787 [2024-12-09 06:07:18.199526] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:08:23.787 [2024-12-09 06:07:18.199880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:08:23.787 [2024-12-09 06:07:18.276316] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:08:24.356 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:08:24.356 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 01:08:24.356 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:08:24.356 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 01:08:24.356 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:08:24.356 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:08:24.356 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 01:08:24.356 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 01:08:24.356 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 01:08:24.356 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.v0V 01:08:24.356 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 01:08:24.356 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.v0V 01:08:24.356 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.v0V 01:08:24.356 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.v0V 01:08:24.356 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:08:24.616 [2024-12-09 06:07:19.075515] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:08:24.616 [2024-12-09 06:07:19.091432] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:08:24.616 [2024-12-09 06:07:19.091635] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:08:24.616 malloc0 01:08:24.616 06:07:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:08:24.616 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72318 01:08:24.616 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:08:24.616 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72318 /var/tmp/bdevperf.sock 01:08:24.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:08:24.616 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72318 ']' 01:08:24.616 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:08:24.616 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 01:08:24.616 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:08:24.616 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 01:08:24.616 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:08:24.876 [2024-12-09 06:07:19.235707] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:08:24.876 [2024-12-09 06:07:19.235771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72318 ] 01:08:24.876 [2024-12-09 06:07:19.386477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:08:24.876 [2024-12-09 06:07:19.445242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:08:25.134 [2024-12-09 06:07:19.517222] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:08:25.700 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:08:25.701 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 01:08:25.701 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.v0V 01:08:25.701 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 01:08:25.958 [2024-12-09 06:07:20.457082] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:08:25.958 TLSTESTn1 01:08:26.216 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:08:26.217 Running I/O for 10 seconds... 
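The TLS attach sequence traced just above reduces to a handful of commands; the following is a minimal sketch only (assuming it is run from the SPDK repo root, and using an illustrative temp-file path in place of whatever mktemp actually returned):

    # Write the interchange-format TLS PSK to a private file, as fips.sh@137-140 does above.
    key_path=$(mktemp -t spdk-psk.XXX)
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
    chmod 0600 "$key_path"

    # Register the key with the bdevperf instance and attach the TLS-protected controller (fips.sh@151-152).
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

    # Kick off the 10-second verify workload against the resulting TLSTESTn1 bdev (fips.sh@156).
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests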
01:08:28.086 5643.00 IOPS, 22.04 MiB/s [2024-12-09T06:07:24.051Z] 5645.50 IOPS, 22.05 MiB/s [2024-12-09T06:07:24.989Z] 5669.67 IOPS, 22.15 MiB/s [2024-12-09T06:07:25.925Z] 5891.00 IOPS, 23.01 MiB/s [2024-12-09T06:07:26.860Z] 6105.00 IOPS, 23.85 MiB/s [2024-12-09T06:07:27.869Z] 6244.83 IOPS, 24.39 MiB/s [2024-12-09T06:07:28.808Z] 6342.14 IOPS, 24.77 MiB/s [2024-12-09T06:07:29.746Z] 6417.00 IOPS, 25.07 MiB/s [2024-12-09T06:07:30.683Z] 6473.00 IOPS, 25.29 MiB/s [2024-12-09T06:07:30.683Z] 6515.90 IOPS, 25.45 MiB/s 01:08:36.096 Latency(us) 01:08:36.096 [2024-12-09T06:07:30.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:08:36.096 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:08:36.096 Verification LBA range: start 0x0 length 0x2000 01:08:36.096 TLSTESTn1 : 10.01 6522.67 25.48 0.00 0.00 19596.18 2566.17 18529.05 01:08:36.096 [2024-12-09T06:07:30.683Z] =================================================================================================================== 01:08:36.096 [2024-12-09T06:07:30.683Z] Total : 6522.67 25.48 0.00 0.00 19596.18 2566.17 18529.05 01:08:36.096 { 01:08:36.096 "results": [ 01:08:36.096 { 01:08:36.096 "job": "TLSTESTn1", 01:08:36.096 "core_mask": "0x4", 01:08:36.096 "workload": "verify", 01:08:36.096 "status": "finished", 01:08:36.096 "verify_range": { 01:08:36.096 "start": 0, 01:08:36.096 "length": 8192 01:08:36.096 }, 01:08:36.096 "queue_depth": 128, 01:08:36.096 "io_size": 4096, 01:08:36.096 "runtime": 10.008945, 01:08:36.096 "iops": 6522.6654757319575, 01:08:36.096 "mibps": 25.47916201457796, 01:08:36.096 "io_failed": 0, 01:08:36.096 "io_timeout": 0, 01:08:36.096 "avg_latency_us": 19596.177196296867, 01:08:36.096 "min_latency_us": 2566.1686746987953, 01:08:36.096 "max_latency_us": 18529.053815261042 01:08:36.096 } 01:08:36.096 ], 01:08:36.096 "core_count": 1 01:08:36.096 } 01:08:36.096 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 01:08:36.096 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 01:08:36.096 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 01:08:36.096 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 01:08:36.096 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 01:08:36.096 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 01:08:36.096 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 01:08:36.096 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 01:08:36.096 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 01:08:36.096 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 01:08:36.096 nvmf_trace.0 01:08:36.356 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 01:08:36.356 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72318 01:08:36.356 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72318 ']' 01:08:36.356 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill 
-0 72318 01:08:36.356 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 01:08:36.356 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:08:36.356 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72318 01:08:36.356 killing process with pid 72318 01:08:36.356 Received shutdown signal, test time was about 10.000000 seconds 01:08:36.356 01:08:36.356 Latency(us) 01:08:36.356 [2024-12-09T06:07:30.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:08:36.356 [2024-12-09T06:07:30.943Z] =================================================================================================================== 01:08:36.356 [2024-12-09T06:07:30.943Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:08:36.356 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:08:36.356 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:08:36.356 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72318' 01:08:36.356 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72318 01:08:36.356 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72318 01:08:36.615 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 01:08:36.615 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 01:08:36.615 06:07:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 01:08:36.615 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:08:36.615 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 01:08:36.615 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 01:08:36.615 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:08:36.615 rmmod nvme_tcp 01:08:36.615 rmmod nvme_fabrics 01:08:36.615 rmmod nvme_keyring 01:08:36.615 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:08:36.615 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 01:08:36.615 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 01:08:36.615 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72278 ']' 01:08:36.615 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72278 01:08:36.615 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72278 ']' 01:08:36.615 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 72278 01:08:36.615 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 01:08:36.615 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:08:36.615 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72278 01:08:36.615 killing process with pid 72278 01:08:36.615 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:08:36.615 06:07:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:08:36.615 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72278' 01:08:36.616 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72278 01:08:36.616 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72278 01:08:36.886 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:08:36.886 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:08:36.886 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:08:36.886 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 01:08:36.886 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 01:08:36.886 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:08:36.886 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 01:08:36.886 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:08:36.886 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:08:36.886 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:08:36.886 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:08:36.886 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:08:37.144 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:08:37.144 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:08:37.144 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:08:37.144 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:08:37.144 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:08:37.144 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:08:37.144 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:08:37.144 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:08:37.144 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:08:37.144 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:08:37.144 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 01:08:37.144 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:37.144 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:08:37.144 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:37.402 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 
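The teardown traced above (iptr plus nvmf_veth_fini) amounts to stripping the SPDK-tagged firewall rules and dismantling the veth/bridge/namespace topology; a minimal sketch with the same interface names follows (the final namespace removal is done by the _remove_spdk_ns helper, whose body is not traced here, so a plain ip netns delete stands in for it):

    # Drop only the firewall rules the test added; they carry an SPDK_NVMF comment (common.sh@791).
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Detach the bridge ports, bring them down, and remove the bridge (common.sh@233-241).
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge

    # Remove the veth pairs on the host and inside the target namespace (common.sh@242-245).
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2

    # Finally drop the target namespace (stand-in for _remove_spdk_ns, which is not shown in this trace).
    ip netns delete nvmf_tgt_ns_spdk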
01:08:37.402 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.v0V 01:08:37.402 ************************************ 01:08:37.402 END TEST nvmf_fips 01:08:37.403 ************************************ 01:08:37.403 01:08:37.403 real 0m14.924s 01:08:37.403 user 0m17.882s 01:08:37.403 sys 0m7.224s 01:08:37.403 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 01:08:37.403 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:08:37.403 06:07:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 01:08:37.403 06:07:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:08:37.403 06:07:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:08:37.403 06:07:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:08:37.403 ************************************ 01:08:37.403 START TEST nvmf_control_msg_list 01:08:37.403 ************************************ 01:08:37.403 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 01:08:37.403 * Looking for test storage... 01:08:37.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:08:37.403 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:08:37.403 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 01:08:37.403 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:08:37.665 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:08:37.665 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:08:37.665 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 01:08:37.665 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 01:08:37.665 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 01:08:37.665 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 01:08:37.665 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 01:08:37.665 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 01:08:37.665 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 01:08:37.665 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 01:08:37.665 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 01:08:37.665 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:08:37.665 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 01:08:37.665 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 01:08:37.665 06:07:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 01:08:37.665 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:08:37.665 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 01:08:37.665 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 01:08:37.665 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:08:37.665 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 01:08:37.665 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:08:37.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:37.666 --rc genhtml_branch_coverage=1 01:08:37.666 --rc genhtml_function_coverage=1 01:08:37.666 --rc genhtml_legend=1 01:08:37.666 --rc geninfo_all_blocks=1 01:08:37.666 --rc geninfo_unexecuted_blocks=1 01:08:37.666 01:08:37.666 ' 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:08:37.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:37.666 --rc genhtml_branch_coverage=1 01:08:37.666 --rc genhtml_function_coverage=1 01:08:37.666 --rc genhtml_legend=1 01:08:37.666 --rc geninfo_all_blocks=1 01:08:37.666 --rc geninfo_unexecuted_blocks=1 01:08:37.666 01:08:37.666 ' 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:08:37.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:37.666 --rc genhtml_branch_coverage=1 01:08:37.666 --rc genhtml_function_coverage=1 01:08:37.666 --rc genhtml_legend=1 01:08:37.666 --rc geninfo_all_blocks=1 01:08:37.666 --rc geninfo_unexecuted_blocks=1 01:08:37.666 01:08:37.666 ' 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:08:37.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:37.666 --rc genhtml_branch_coverage=1 01:08:37.666 --rc genhtml_function_coverage=1 01:08:37.666 --rc genhtml_legend=1 01:08:37.666 --rc 
geninfo_all_blocks=1 01:08:37.666 --rc geninfo_unexecuted_blocks=1 01:08:37.666 01:08:37.666 ' 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:08:37.666 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:08:37.666 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:08:37.667 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:08:37.667 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:08:37.667 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:08:37.667 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:08:37.667 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:08:37.667 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:08:37.667 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:08:37.667 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:08:37.667 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:08:37.667 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:08:37.667 Cannot find device "nvmf_init_br" 01:08:37.667 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 01:08:37.667 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:08:37.667 Cannot find device "nvmf_init_br2" 01:08:37.667 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 01:08:37.667 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:08:37.667 Cannot find device "nvmf_tgt_br" 01:08:37.667 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 01:08:37.667 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:08:37.667 Cannot find device "nvmf_tgt_br2" 01:08:37.667 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 01:08:37.667 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:08:37.667 Cannot find device "nvmf_init_br" 01:08:37.667 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 01:08:37.667 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:08:37.667 Cannot find device "nvmf_init_br2" 01:08:37.667 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 01:08:37.667 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:08:37.926 Cannot find device "nvmf_tgt_br" 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:08:37.926 Cannot find device "nvmf_tgt_br2" 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:08:37.926 Cannot find device "nvmf_br" 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:08:37.926 Cannot find 
device "nvmf_init_if" 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:08:37.926 Cannot find device "nvmf_init_if2" 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:08:37.926 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:08:37.926 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:08:37.926 06:07:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:08:37.926 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:08:38.185 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:08:38.185 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:08:38.185 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:08:38.185 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:08:38.185 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:08:38.185 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:08:38.185 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:08:38.185 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:08:38.185 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:08:38.185 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 01:08:38.185 01:08:38.185 --- 10.0.0.3 ping statistics --- 01:08:38.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:38.185 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 01:08:38.185 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:08:38.185 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:08:38.185 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 01:08:38.185 01:08:38.185 --- 10.0.0.4 ping statistics --- 01:08:38.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:38.185 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 01:08:38.185 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:08:38.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:08:38.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 01:08:38.185 01:08:38.185 --- 10.0.0.1 ping statistics --- 01:08:38.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:38.185 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 01:08:38.185 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:08:38.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:08:38.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 01:08:38.186 01:08:38.186 --- 10.0.0.2 ping statistics --- 01:08:38.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:38.186 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 01:08:38.186 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:08:38.186 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 01:08:38.186 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:08:38.186 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:08:38.186 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:08:38.186 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:08:38.186 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:08:38.186 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:08:38.186 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:08:38.186 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 01:08:38.186 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:08:38.186 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 01:08:38.186 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:08:38.186 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=72714 01:08:38.186 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:08:38.186 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 72714 01:08:38.186 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 72714 ']' 01:08:38.186 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:08:38.186 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 01:08:38.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:08:38.186 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
01:08:38.186 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 01:08:38.186 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:08:38.186 [2024-12-09 06:07:32.708553] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:08:38.186 [2024-12-09 06:07:32.708612] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:08:38.445 [2024-12-09 06:07:32.861488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:08:38.445 [2024-12-09 06:07:32.899444] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:08:38.445 [2024-12-09 06:07:32.899490] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:08:38.445 [2024-12-09 06:07:32.899500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:08:38.445 [2024-12-09 06:07:32.899508] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:08:38.445 [2024-12-09 06:07:32.899515] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:08:38.445 [2024-12-09 06:07:32.899764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:08:38.445 [2024-12-09 06:07:32.941250] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:08:39.012 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:08:39.012 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 01:08:39.012 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:08:39.012 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 01:08:39.012 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:08:39.272 [2024-12-09 06:07:33.622039] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:08:39.272 Malloc0 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:08:39.272 [2024-12-09 06:07:33.674810] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=72746 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=72747 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=72748 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:08:39.272 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 72746 01:08:39.532 [2024-12-09 06:07:33.885012] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 01:08:39.532 [2024-12-09 06:07:33.895286] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 01:08:39.532 [2024-12-09 06:07:33.895434] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 01:08:40.472 Initializing NVMe Controllers 01:08:40.472 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 01:08:40.472 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 01:08:40.472 Initialization complete. Launching workers. 01:08:40.472 ======================================================== 01:08:40.472 Latency(us) 01:08:40.472 Device Information : IOPS MiB/s Average min max 01:08:40.472 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4746.00 18.54 210.47 91.45 1464.40 01:08:40.472 ======================================================== 01:08:40.472 Total : 4746.00 18.54 210.47 91.45 1464.40 01:08:40.472 01:08:40.472 Initializing NVMe Controllers 01:08:40.472 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 01:08:40.472 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 01:08:40.472 Initialization complete. Launching workers. 01:08:40.472 ======================================================== 01:08:40.472 Latency(us) 01:08:40.472 Device Information : IOPS MiB/s Average min max 01:08:40.472 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4742.00 18.52 210.65 114.37 1467.40 01:08:40.472 ======================================================== 01:08:40.472 Total : 4742.00 18.52 210.65 114.37 1467.40 01:08:40.472 01:08:40.472 Initializing NVMe Controllers 01:08:40.472 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 01:08:40.472 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 01:08:40.472 Initialization complete. Launching workers. 
01:08:40.472 ======================================================== 01:08:40.472 Latency(us) 01:08:40.472 Device Information : IOPS MiB/s Average min max 01:08:40.472 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 4727.00 18.46 211.30 135.29 1466.80 01:08:40.472 ======================================================== 01:08:40.472 Total : 4727.00 18.46 211.30 135.29 1466.80 01:08:40.472 01:08:40.472 06:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 72747 01:08:40.472 06:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 72748 01:08:40.472 06:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 01:08:40.472 06:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 01:08:40.472 06:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 01:08:40.472 06:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 01:08:40.472 06:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:08:40.472 06:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 01:08:40.472 06:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 01:08:40.473 06:07:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:08:40.473 rmmod nvme_tcp 01:08:40.473 rmmod nvme_fabrics 01:08:40.473 rmmod nvme_keyring 01:08:40.473 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:08:40.735 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 01:08:40.735 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 01:08:40.735 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 72714 ']' 01:08:40.735 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 72714 01:08:40.735 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 72714 ']' 01:08:40.735 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 72714 01:08:40.735 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 01:08:40.735 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:08:40.735 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72714 01:08:40.735 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:08:40.735 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:08:40.735 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72714' 01:08:40.735 killing process with pid 72714 01:08:40.735 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 72714 01:08:40.735 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 72714 01:08:40.735 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:08:40.735 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:08:40.735 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:08:40.735 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 01:08:40.735 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 01:08:40.735 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:08:40.735 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 01:08:40.735 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:08:40.735 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:08:40.735 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:08:40.735 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:08:40.993 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:08:40.993 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:08:40.993 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:08:40.993 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:08:40.993 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:08:40.993 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:08:40.993 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:08:40.993 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:08:40.993 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:08:40.993 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:08:40.993 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:08:40.993 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 01:08:40.993 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:40.993 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:08:40.993 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:41.252 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 01:08:41.252 01:08:41.252 real 0m3.773s 01:08:41.252 user 0m5.179s 01:08:41.252 
sys 0m1.935s 01:08:41.252 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 01:08:41.252 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:08:41.252 ************************************ 01:08:41.252 END TEST nvmf_control_msg_list 01:08:41.252 ************************************ 01:08:41.253 06:07:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 01:08:41.253 06:07:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:08:41.253 06:07:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:08:41.253 06:07:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:08:41.253 ************************************ 01:08:41.253 START TEST nvmf_wait_for_buf 01:08:41.253 ************************************ 01:08:41.253 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 01:08:41.253 * Looking for test storage... 01:08:41.253 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:08:41.253 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:08:41.253 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 01:08:41.253 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:08:41.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:41.512 --rc genhtml_branch_coverage=1 01:08:41.512 --rc genhtml_function_coverage=1 01:08:41.512 --rc genhtml_legend=1 01:08:41.512 --rc geninfo_all_blocks=1 01:08:41.512 --rc geninfo_unexecuted_blocks=1 01:08:41.512 01:08:41.512 ' 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:08:41.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:41.512 --rc genhtml_branch_coverage=1 01:08:41.512 --rc genhtml_function_coverage=1 01:08:41.512 --rc genhtml_legend=1 01:08:41.512 --rc geninfo_all_blocks=1 01:08:41.512 --rc geninfo_unexecuted_blocks=1 01:08:41.512 01:08:41.512 ' 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:08:41.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:41.512 --rc genhtml_branch_coverage=1 01:08:41.512 --rc genhtml_function_coverage=1 01:08:41.512 --rc genhtml_legend=1 01:08:41.512 --rc geninfo_all_blocks=1 01:08:41.512 --rc geninfo_unexecuted_blocks=1 01:08:41.512 01:08:41.512 ' 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:08:41.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:41.512 --rc genhtml_branch_coverage=1 01:08:41.512 --rc genhtml_function_coverage=1 01:08:41.512 --rc genhtml_legend=1 01:08:41.512 --rc geninfo_all_blocks=1 01:08:41.512 --rc geninfo_unexecuted_blocks=1 01:08:41.512 01:08:41.512 ' 01:08:41.512 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:08:41.513 06:07:35 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:08:41.513 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:08:41.513 Cannot find device "nvmf_init_br" 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 01:08:41.513 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:08:41.513 Cannot find device "nvmf_init_br2" 01:08:41.513 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 01:08:41.513 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:08:41.513 Cannot find device "nvmf_tgt_br" 01:08:41.513 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 01:08:41.513 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:08:41.513 Cannot find device "nvmf_tgt_br2" 01:08:41.513 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 01:08:41.513 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:08:41.513 Cannot find device "nvmf_init_br" 01:08:41.513 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 01:08:41.513 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:08:41.513 Cannot find device "nvmf_init_br2" 01:08:41.513 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 01:08:41.513 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:08:41.773 Cannot find device "nvmf_tgt_br" 01:08:41.773 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 01:08:41.773 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:08:41.773 Cannot find device "nvmf_tgt_br2" 01:08:41.773 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 01:08:41.773 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:08:41.773 Cannot find device "nvmf_br" 01:08:41.773 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 01:08:41.773 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:08:41.773 Cannot find device "nvmf_init_if" 01:08:41.773 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 01:08:41.773 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:08:41.773 Cannot find device "nvmf_init_if2" 01:08:41.773 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 01:08:41.773 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:08:41.773 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:41.773 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 01:08:41.773 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:08:41.773 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:41.773 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 01:08:41.773 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:08:41.773 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:08:41.773 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:08:41.773 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:08:41.773 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:08:41.773 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:08:41.773 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:08:41.773 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:08:41.773 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:08:41.773 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:08:41.773 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:08:41.773 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:08:41.773 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:08:41.773 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:08:41.773 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:08:42.032 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:08:42.032 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:08:42.032 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:08:42.032 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:08:42.032 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:08:42.032 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:08:42.032 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:08:42.032 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:08:42.032 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:08:42.032 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:08:42.032 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:08:42.032 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:08:42.032 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:08:42.032 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:08:42.032 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:08:42.032 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:08:42.032 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:08:42.032 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:08:42.032 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:08:42.032 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 01:08:42.032 01:08:42.032 --- 10.0.0.3 ping statistics --- 01:08:42.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:42.032 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 01:08:42.032 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:08:42.032 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:08:42.032 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 01:08:42.032 01:08:42.032 --- 10.0.0.4 ping statistics --- 01:08:42.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:42.032 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 01:08:42.032 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:08:42.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:08:42.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 01:08:42.033 01:08:42.033 --- 10.0.0.1 ping statistics --- 01:08:42.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:42.033 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 01:08:42.033 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:08:42.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:08:42.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 01:08:42.033 01:08:42.033 --- 10.0.0.2 ping statistics --- 01:08:42.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:42.033 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 01:08:42.033 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:08:42.033 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 01:08:42.033 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:08:42.033 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:08:42.033 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:08:42.033 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:08:42.033 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:08:42.033 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:08:42.033 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:08:42.033 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 01:08:42.033 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:08:42.033 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 01:08:42.033 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:08:42.033 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=72984 01:08:42.033 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 01:08:42.033 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 72984 01:08:42.033 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 72984 ']' 01:08:42.033 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:08:42.033 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 01:08:42.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:08:42.033 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:08:42.033 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 01:08:42.033 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:08:42.291 [2024-12-09 06:07:36.637870] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:08:42.291 [2024-12-09 06:07:36.637930] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:08:42.291 [2024-12-09 06:07:36.773522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:08:42.291 [2024-12-09 06:07:36.814551] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:08:42.291 [2024-12-09 06:07:36.814595] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:08:42.291 [2024-12-09 06:07:36.814605] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:08:42.291 [2024-12-09 06:07:36.814613] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:08:42.291 [2024-12-09 06:07:36.814620] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:08:42.291 [2024-12-09 06:07:36.814870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:43.228 06:07:37 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:08:43.228 [2024-12-09 06:07:37.620945] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:08:43.228 Malloc0 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:08:43.228 [2024-12-09 06:07:37.678615] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:08:43.228 [2024-12-09 06:07:37.710666] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:43.228 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:08:43.487 [2024-12-09 06:07:37.900173] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 01:08:44.866 Initializing NVMe Controllers 01:08:44.866 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 01:08:44.866 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 01:08:44.866 Initialization complete. Launching workers. 01:08:44.866 ======================================================== 01:08:44.866 Latency(us) 01:08:44.866 Device Information : IOPS MiB/s Average min max 01:08:44.866 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 499.19 62.40 8013.38 5014.76 11006.50 01:08:44.866 ======================================================== 01:08:44.866 Total : 499.19 62.40 8013.38 5014.76 11006.50 01:08:44.866 01:08:44.866 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 01:08:44.866 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:44.867 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 01:08:44.867 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:08:44.867 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:44.867 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 01:08:44.867 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 01:08:44.867 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 01:08:44.867 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 01:08:44.867 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 01:08:44.867 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 01:08:44.867 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:08:44.867 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 01:08:44.867 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 01:08:44.867 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:08:44.867 rmmod nvme_tcp 01:08:44.867 rmmod nvme_fabrics 01:08:44.867 rmmod nvme_keyring 01:08:44.867 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:08:44.867 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 01:08:44.867 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 01:08:44.867 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 72984 ']' 01:08:44.867 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 72984 01:08:44.867 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 72984 ']' 01:08:44.867 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 72984 01:08:44.867 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 01:08:44.867 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:08:44.867 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72984 01:08:44.867 killing process with pid 72984 01:08:44.867 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:08:44.867 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:08:44.867 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72984' 01:08:44.867 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 72984 01:08:44.867 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 72984 01:08:45.126 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:08:45.126 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:08:45.126 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:08:45.126 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 01:08:45.126 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:08:45.126 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 01:08:45.126 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 01:08:45.126 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:08:45.126 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:08:45.126 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:08:45.126 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:08:45.126 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:08:45.126 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:08:45.126 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:08:45.126 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:08:45.126 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:08:45.126 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:08:45.126 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:08:45.126 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:08:45.385 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:08:45.385 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:08:45.385 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:08:45.385 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 01:08:45.385 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:45.385 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:08:45.385 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:45.385 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 01:08:45.385 01:08:45.385 real 0m4.183s 01:08:45.385 user 0m3.269s 01:08:45.385 sys 0m1.179s 01:08:45.385 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:08:45.385 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:08:45.385 ************************************ 01:08:45.385 END TEST nvmf_wait_for_buf 01:08:45.385 ************************************ 01:08:45.385 06:07:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 01:08:45.385 06:07:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 01:08:45.385 06:07:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 01:08:45.385 06:07:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:08:45.385 06:07:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:08:45.385 06:07:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:08:45.385 ************************************ 01:08:45.385 START TEST nvmf_nsid 01:08:45.385 ************************************ 01:08:45.385 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 01:08:45.645 * Looking for test storage... 
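Before the nsid test gets going, note how the previous test tore itself down in the lines above: iptables rules are removed by filtering on their SPDK_NVMF comment rather than flushed wholesale, then the veth/bridge topology and the namespace are dismantled. A condensed sketch of that teardown, using the interface and namespace names from the trace (the final netns delete is an assumption about what remove_spdk_ns boils down to):

  # Drop only the rules the harness added (they all carry an SPDK_NVMF comment)
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # Detach the bridge ports and bring the host-side links down
  for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$port" nomaster
      ip link set "$port" down
  done

  # Remove the bridge, the initiator-side veth ends, and the target-side ends inside the namespace
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2

  # Finally drop the namespace itself (assumed equivalent of remove_spdk_ns)
  ip netns delete nvmf_tgt_ns_spdk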
01:08:45.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:08:45.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:45.645 --rc genhtml_branch_coverage=1 01:08:45.645 --rc genhtml_function_coverage=1 01:08:45.645 --rc genhtml_legend=1 01:08:45.645 --rc geninfo_all_blocks=1 01:08:45.645 --rc geninfo_unexecuted_blocks=1 01:08:45.645 01:08:45.645 ' 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:08:45.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:45.645 --rc genhtml_branch_coverage=1 01:08:45.645 --rc genhtml_function_coverage=1 01:08:45.645 --rc genhtml_legend=1 01:08:45.645 --rc geninfo_all_blocks=1 01:08:45.645 --rc geninfo_unexecuted_blocks=1 01:08:45.645 01:08:45.645 ' 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:08:45.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:45.645 --rc genhtml_branch_coverage=1 01:08:45.645 --rc genhtml_function_coverage=1 01:08:45.645 --rc genhtml_legend=1 01:08:45.645 --rc geninfo_all_blocks=1 01:08:45.645 --rc geninfo_unexecuted_blocks=1 01:08:45.645 01:08:45.645 ' 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:08:45.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:45.645 --rc genhtml_branch_coverage=1 01:08:45.645 --rc genhtml_function_coverage=1 01:08:45.645 --rc genhtml_legend=1 01:08:45.645 --rc geninfo_all_blocks=1 01:08:45.645 --rc geninfo_unexecuted_blocks=1 01:08:45.645 01:08:45.645 ' 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
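The scripts/common.sh traces above are just the harness deciding whether the installed lcov is older than 2.0 so it can pick a compatible LCOV_OPTS set: the version strings are split on ".-:" and compared field by field. Reduced to its core, the comparison works roughly like the sketch below (a simplification of the logic visible in the trace, assuming purely numeric fields; the real script validates each field with its decimal helper):

  # Return 0 when version $1 sorts strictly before version $2
  version_lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          local a=${ver1[v]:-0} b=${ver2[v]:-0}
          (( a > b )) && return 1
          (( a < b )) && return 0
      done
      return 1   # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo "lcov < 2: use the pre-2.0 option set"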
01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:08:45.645 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:08:45.646 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:08:45.646 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:08:45.646 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:08:45.646 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:08:45.646 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:08:45.646 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:08:45.646 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 01:08:45.646 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 01:08:45.646 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 01:08:45.646 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 01:08:45.646 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 01:08:45.646 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 01:08:45.646 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 01:08:45.646 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:08:45.646 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:08:45.905 Cannot find device "nvmf_init_br" 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:08:45.905 Cannot find device "nvmf_init_br2" 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:08:45.905 Cannot find device "nvmf_tgt_br" 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:08:45.905 Cannot find device "nvmf_tgt_br2" 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:08:45.905 Cannot find device "nvmf_init_br" 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:08:45.905 Cannot find device "nvmf_init_br2" 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:08:45.905 Cannot find device "nvmf_tgt_br" 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:08:45.905 Cannot find device "nvmf_tgt_br2" 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:08:45.905 Cannot find device "nvmf_br" 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:08:45.905 Cannot find device "nvmf_init_if" 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:08:45.905 Cannot find device "nvmf_init_if2" 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:08:45.905 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
01:08:45.905 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:08:45.905 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:08:46.164 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:08:46.164 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:08:46.164 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:08:46.164 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:08:46.164 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:08:46.164 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:08:46.164 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:08:46.164 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:08:46.164 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:08:46.164 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:08:46.164 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:08:46.164 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:08:46.164 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:08:46.164 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:08:46.164 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:08:46.164 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:08:46.164 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:08:46.164 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:08:46.164 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:08:46.164 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:08:46.164 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:08:46.164 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
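Put together, the veth setup the trace just walked through builds a small bridged topology: two initiator-side interfaces stay in the root namespace (10.0.0.1 and 10.0.0.2) while the two target-side interfaces live in nvmf_tgt_ns_spdk (10.0.0.3 and 10.0.0.4), all joined through the nvmf_br bridge. A condensed sketch of the same sequence, using the names and addresses shown in the trace:

  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: the *_if end carries traffic, the *_br end plugs into the bridge
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

  # Target-side endpoints move into the namespace the nvmf_tgt app runs in
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Addressing: initiators in the root namespace, targets inside the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # Bring everything up, then tie the four *_br ends together with a bridge
  ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
  ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
  ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$port" master nvmf_br
  done

The ACCEPT rules tagged with SPDK_NVMF comments and the four-way ping sweep that follow in the trace simply confirm that both initiator addresses can reach both target addresses across the bridge before any NVMe/TCP traffic is attempted.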
01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:08:46.423 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:08:46.423 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.120 ms 01:08:46.423 01:08:46.423 --- 10.0.0.3 ping statistics --- 01:08:46.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:46.423 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:08:46.423 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:08:46.423 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 01:08:46.423 01:08:46.423 --- 10.0.0.4 ping statistics --- 01:08:46.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:46.423 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:08:46.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:08:46.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 01:08:46.423 01:08:46.423 --- 10.0.0.1 ping statistics --- 01:08:46.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:46.423 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:08:46.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:08:46.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 01:08:46.423 01:08:46.423 --- 10.0.0.2 ping statistics --- 01:08:46.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:46.423 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73265 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73265 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73265 ']' 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 01:08:46.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 01:08:46.423 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 01:08:46.423 [2024-12-09 06:07:40.950270] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:08:46.423 [2024-12-09 06:07:40.950547] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:08:46.683 [2024-12-09 06:07:41.101308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:08:46.683 [2024-12-09 06:07:41.138678] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:08:46.683 [2024-12-09 06:07:41.138727] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:08:46.683 [2024-12-09 06:07:41.138736] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:08:46.683 [2024-12-09 06:07:41.138744] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:08:46.683 [2024-12-09 06:07:41.138751] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:08:46.683 [2024-12-09 06:07:41.138999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:08:46.683 [2024-12-09 06:07:41.180415] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:08:47.251 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:08:47.251 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 01:08:47.251 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:08:47.251 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 01:08:47.251 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73297 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=6083dc76-cf1f-47fd-a564-0360fe11f39e 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=0b2d56e1-ef70-4526-884c-b12dcc925eaf 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=c2be6261-55ad-4321-8c03-1c21c78132ba 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 01:08:47.510 [2024-12-09 06:07:41.903242] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:08:47.510 [2024-12-09 06:07:41.903440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73297 ] 01:08:47.510 null0 01:08:47.510 null1 01:08:47.510 null2 01:08:47.510 [2024-12-09 06:07:41.924276] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:08:47.510 [2024-12-09 06:07:41.948328] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:47.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73297 /var/tmp/tgt2.sock 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73297 ']' 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
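Once the second target is up, what the nsid test is really asserting is that each namespace's NGUID, as reported over NVMe/TCP, matches the UUID the test generated for it: uuid2nguid in the trace is just the UUID with its dashes stripped, and the comparison is done in upper case. A sketch of that per-namespace check as it appears in the trace (the connect parameters are copied from this run, /dev/nvme0n1 and the UUID value are examples from it, and nvme-cli plus jq are assumed to be installed):

  # Connect to the second target (it listens on 10.0.0.1:4421 later in the trace)
  nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 \
      --hostid=bac40580-41f0-4da4-8cd9-1be4901a67b8

  ns1uuid=6083dc76-cf1f-47fd-a564-0360fe11f39e     # uuidgen output for namespace 1 in this run

  # uuid2nguid: drop the dashes, compare case-insensitively (the trace upper-cases both sides)
  expected=$(tr -d - <<< "$ns1uuid" | tr '[:lower:]' '[:upper:]')

  # Read back the NGUID the target actually reports for namespace 1
  actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')

  [[ $actual == "$expected" ]] && echo "NSID 1 NGUID matches its UUID" \
                               || echo "mismatch: $actual vs $expected"

The same check is repeated for nvme0n2 and nvme0n3 further down, after waitforblk has confirmed each block device has appeared in lsblk output.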
01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 01:08:47.510 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 01:08:47.510 [2024-12-09 06:07:42.055091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:08:47.769 [2024-12-09 06:07:42.114115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:08:47.769 [2024-12-09 06:07:42.205614] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:08:48.027 06:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:08:48.027 06:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 01:08:48.027 06:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 01:08:48.285 [2024-12-09 06:07:42.778244] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:08:48.285 [2024-12-09 06:07:42.794303] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 01:08:48.285 nvme0n1 nvme0n2 01:08:48.285 nvme1n1 01:08:48.285 06:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 01:08:48.285 06:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 01:08:48.285 06:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:08:48.544 06:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 01:08:48.544 06:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 01:08:48.544 06:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 01:08:48.544 06:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 01:08:48.544 06:07:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 01:08:48.544 06:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 01:08:48.544 06:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 01:08:48.544 06:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 01:08:48.544 06:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 01:08:48.544 06:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 01:08:48.544 06:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 01:08:48.544 06:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 01:08:48.544 06:07:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 01:08:49.478 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 01:08:49.478 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 01:08:49.478 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 01:08:49.478 06:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 01:08:49.478 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 01:08:49.478 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 6083dc76-cf1f-47fd-a564-0360fe11f39e 01:08:49.478 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 01:08:49.737 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 01:08:49.737 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 01:08:49.737 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=6083dc76cf1f47fda5640360fe11f39e 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 6083DC76CF1F47FDA5640360FE11F39E 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 6083DC76CF1F47FDA5640360FE11F39E == \6\0\8\3\D\C\7\6\C\F\1\F\4\7\F\D\A\5\6\4\0\3\6\0\F\E\1\1\F\3\9\E ]] 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 0b2d56e1-ef70-4526-884c-b12dcc925eaf 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=0b2d56e1ef704526884cb12dcc925eaf 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 0B2D56E1EF704526884CB12DCC925EAF 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 0B2D56E1EF704526884CB12DCC925EAF == \0\B\2\D\5\6\E\1\E\F\7\0\4\5\2\6\8\8\4\C\B\1\2\D\C\C\9\2\5\E\A\F ]] 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 01:08:49.738 06:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid c2be6261-55ad-4321-8c03-1c21c78132ba 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c2be626155ad43218c031c21c78132ba 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C2BE626155AD43218C031C21C78132BA 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ C2BE626155AD43218C031C21C78132BA == \C\2\B\E\6\2\6\1\5\5\A\D\4\3\2\1\8\C\0\3\1\C\2\1\C\7\8\1\3\2\B\A ]] 01:08:49.738 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 01:08:49.997 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 01:08:49.997 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 01:08:49.997 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73297 01:08:49.997 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73297 ']' 01:08:49.997 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73297 01:08:49.997 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 01:08:49.997 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:08:49.997 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73297 01:08:49.997 killing process with pid 73297 01:08:49.997 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:08:49.997 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:08:49.997 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73297' 01:08:49.997 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73297 01:08:49.997 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73297 01:08:50.566 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 01:08:50.566 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 01:08:50.566 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 01:08:50.566 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 01:08:50.566 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 01:08:50.566 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 01:08:50.566 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:08:50.566 rmmod nvme_tcp 01:08:50.566 rmmod nvme_fabrics 01:08:50.566 rmmod nvme_keyring 01:08:50.566 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:08:50.566 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 01:08:50.566 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 01:08:50.566 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73265 ']' 01:08:50.567 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73265 01:08:50.567 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73265 ']' 01:08:50.567 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73265 01:08:50.567 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 01:08:50.825 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:08:50.825 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73265 01:08:50.825 killing process with pid 73265 01:08:50.825 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:08:50.825 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:08:50.825 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73265' 01:08:50.825 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73265 01:08:50.825 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73265 01:08:50.825 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:08:50.825 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:08:50.825 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:08:50.825 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 01:08:50.825 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:08:50.825 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 01:08:50.825 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 01:08:50.825 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:08:50.825 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:08:50.825 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:08:50.825 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:08:51.084 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:08:51.084 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 01:08:51.084 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:08:51.084 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:08:51.084 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:08:51.084 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:08:51.084 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:08:51.084 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:08:51.084 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:08:51.084 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:08:51.084 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:08:51.084 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 01:08:51.084 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:51.084 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:08:51.084 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:51.344 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 01:08:51.344 01:08:51.344 real 0m5.734s 01:08:51.344 user 0m7.344s 01:08:51.344 sys 0m2.428s 01:08:51.344 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 01:08:51.344 ************************************ 01:08:51.344 END TEST nvmf_nsid 01:08:51.344 ************************************ 01:08:51.344 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 01:08:51.344 06:07:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 01:08:51.344 ************************************ 01:08:51.344 END TEST nvmf_target_extra 01:08:51.344 ************************************ 01:08:51.344 01:08:51.344 real 4m31.220s 01:08:51.344 user 8m36.133s 01:08:51.344 sys 1m22.883s 01:08:51.344 06:07:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 01:08:51.344 06:07:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:08:51.344 06:07:45 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 01:08:51.344 06:07:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:08:51.344 06:07:45 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 01:08:51.344 06:07:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:08:51.344 ************************************ 01:08:51.344 START TEST nvmf_host 01:08:51.344 ************************************ 01:08:51.344 06:07:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 01:08:51.603 * Looking for test storage... 
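For reference, the NGUID verification that target/nsid.sh performed above reduces to a few shell commands. The sketch below is a condensed, standalone restatement rather than part of the test output: the device path and polling interval are illustrative, and the UUID is the one namespace 1 was created with in this run.

  # Condensed sketch of the nsid.sh NGUID check; assumes bash 4+, nvme-cli and jq.
  dev=/dev/nvme0n1                                # illustrative device path
  uuid=6083dc76-cf1f-47fd-a564-0360fe11f39e       # UUID given to namespace 1 above

  # waitforblk: poll until the block device appears (interval is illustrative).
  until lsblk -l -o NAME | grep -q -w "$(basename "$dev")"; do sleep 0.1; done

  # uuid2nguid: the expected NGUID is the UUID upper-cased with hyphens stripped.
  expected=$(tr -d - <<< "${uuid^^}")

  # Read the NGUID the controller reports for the namespace and compare.
  nguid=$(nvme id-ns "$dev" -o json | jq -r .nguid)
  [[ ${nguid^^} == "$expected" ]] || { echo "NGUID mismatch: ${nguid^^} != $expected"; exit 1; }

The same pattern is repeated above for nvme0n2 and nvme0n3 before the controller is disconnected and the target process is killed.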
01:08:51.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 01:08:51.603 06:07:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:08:51.603 06:07:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 01:08:51.603 06:07:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:08:51.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:51.603 --rc genhtml_branch_coverage=1 01:08:51.603 --rc genhtml_function_coverage=1 01:08:51.603 --rc genhtml_legend=1 01:08:51.603 --rc geninfo_all_blocks=1 01:08:51.603 --rc geninfo_unexecuted_blocks=1 01:08:51.603 01:08:51.603 ' 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:08:51.603 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 01:08:51.603 --rc genhtml_branch_coverage=1 01:08:51.603 --rc genhtml_function_coverage=1 01:08:51.603 --rc genhtml_legend=1 01:08:51.603 --rc geninfo_all_blocks=1 01:08:51.603 --rc geninfo_unexecuted_blocks=1 01:08:51.603 01:08:51.603 ' 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:08:51.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:51.603 --rc genhtml_branch_coverage=1 01:08:51.603 --rc genhtml_function_coverage=1 01:08:51.603 --rc genhtml_legend=1 01:08:51.603 --rc geninfo_all_blocks=1 01:08:51.603 --rc geninfo_unexecuted_blocks=1 01:08:51.603 01:08:51.603 ' 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:08:51.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:51.603 --rc genhtml_branch_coverage=1 01:08:51.603 --rc genhtml_function_coverage=1 01:08:51.603 --rc genhtml_legend=1 01:08:51.603 --rc geninfo_all_blocks=1 01:08:51.603 --rc geninfo_unexecuted_blocks=1 01:08:51.603 01:08:51.603 ' 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:08:51.603 06:07:46 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:51.604 06:07:46 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:51.604 06:07:46 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:51.604 06:07:46 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 01:08:51.604 06:07:46 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:51.604 06:07:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 01:08:51.604 06:07:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:08:51.604 06:07:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:08:51.604 06:07:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:08:51.604 06:07:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:08:51.604 06:07:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:08:51.604 06:07:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:08:51.604 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:08:51.604 06:07:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:08:51.604 06:07:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:08:51.604 06:07:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 01:08:51.604 06:07:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 01:08:51.604 06:07:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 01:08:51.604 06:07:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 01:08:51.604 06:07:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 01:08:51.604 
06:07:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:08:51.604 06:07:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:08:51.604 06:07:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:08:51.604 ************************************ 01:08:51.604 START TEST nvmf_identify 01:08:51.604 ************************************ 01:08:51.604 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 01:08:51.864 * Looking for test storage... 01:08:51.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:08:51.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:51.864 --rc genhtml_branch_coverage=1 01:08:51.864 --rc genhtml_function_coverage=1 01:08:51.864 --rc genhtml_legend=1 01:08:51.864 --rc geninfo_all_blocks=1 01:08:51.864 --rc geninfo_unexecuted_blocks=1 01:08:51.864 01:08:51.864 ' 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:08:51.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:51.864 --rc genhtml_branch_coverage=1 01:08:51.864 --rc genhtml_function_coverage=1 01:08:51.864 --rc genhtml_legend=1 01:08:51.864 --rc geninfo_all_blocks=1 01:08:51.864 --rc geninfo_unexecuted_blocks=1 01:08:51.864 01:08:51.864 ' 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:08:51.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:51.864 --rc genhtml_branch_coverage=1 01:08:51.864 --rc genhtml_function_coverage=1 01:08:51.864 --rc genhtml_legend=1 01:08:51.864 --rc geninfo_all_blocks=1 01:08:51.864 --rc geninfo_unexecuted_blocks=1 01:08:51.864 01:08:51.864 ' 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:08:51.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:51.864 --rc genhtml_branch_coverage=1 01:08:51.864 --rc genhtml_function_coverage=1 01:08:51.864 --rc genhtml_legend=1 01:08:51.864 --rc geninfo_all_blocks=1 01:08:51.864 --rc geninfo_unexecuted_blocks=1 01:08:51.864 01:08:51.864 ' 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:51.864 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:51.864 
06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:08:51.865 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:51.865 06:07:46 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:08:51.865 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:08:52.124 Cannot find device "nvmf_init_br" 01:08:52.124 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 01:08:52.124 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:08:52.124 Cannot find device "nvmf_init_br2" 01:08:52.124 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 01:08:52.124 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:08:52.124 Cannot find device "nvmf_tgt_br" 01:08:52.124 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 01:08:52.124 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
01:08:52.124 Cannot find device "nvmf_tgt_br2" 01:08:52.124 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 01:08:52.124 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:08:52.124 Cannot find device "nvmf_init_br" 01:08:52.124 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 01:08:52.124 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:08:52.124 Cannot find device "nvmf_init_br2" 01:08:52.124 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 01:08:52.124 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:08:52.124 Cannot find device "nvmf_tgt_br" 01:08:52.124 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 01:08:52.124 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:08:52.124 Cannot find device "nvmf_tgt_br2" 01:08:52.124 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 01:08:52.124 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:08:52.124 Cannot find device "nvmf_br" 01:08:52.124 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 01:08:52.124 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:08:52.124 Cannot find device "nvmf_init_if" 01:08:52.124 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 01:08:52.124 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:08:52.124 Cannot find device "nvmf_init_if2" 01:08:52.124 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 01:08:52.125 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:08:52.125 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:52.125 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 01:08:52.125 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:08:52.125 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:52.125 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 01:08:52.125 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:08:52.125 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:08:52.125 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:08:52.125 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:08:52.384 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:08:52.384 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:08:52.384 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:08:52.384 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:08:52.384 
06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:08:52.384 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:08:52.384 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:08:52.384 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:08:52.384 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:08:52.384 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:08:52.384 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:08:52.384 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:08:52.384 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:08:52.384 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:08:52.384 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:08:52.384 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:08:52.384 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:08:52.384 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:08:52.384 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:08:52.384 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:08:52.384 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:08:52.384 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:08:52.384 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:08:52.384 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:08:52.384 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:08:52.384 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:08:52.384 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:08:52.384 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:08:52.384 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:08:52.644 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:08:52.644 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.130 ms 01:08:52.644 01:08:52.644 --- 10.0.0.3 ping statistics --- 01:08:52.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:52.644 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 01:08:52.644 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:08:52.644 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:08:52.644 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 01:08:52.644 01:08:52.644 --- 10.0.0.4 ping statistics --- 01:08:52.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:52.644 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 01:08:52.644 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:08:52.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:08:52.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 01:08:52.644 01:08:52.644 --- 10.0.0.1 ping statistics --- 01:08:52.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:52.644 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 01:08:52.644 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:08:52.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:08:52.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 01:08:52.644 01:08:52.644 --- 10.0.0.2 ping statistics --- 01:08:52.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:52.644 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 01:08:52.644 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:08:52.644 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 01:08:52.644 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:08:52.644 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:08:52.644 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:08:52.644 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:08:52.644 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:08:52.644 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:08:52.644 06:07:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:08:52.644 06:07:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 01:08:52.644 06:07:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 01:08:52.644 06:07:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:08:52.644 06:07:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=73659 01:08:52.644 06:07:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:08:52.644 06:07:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:08:52.644 06:07:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 73659 01:08:52.644 06:07:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 73659 ']' 01:08:52.644 
06:07:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:08:52.644 06:07:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 01:08:52.644 06:07:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:08:52.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:08:52.644 06:07:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 01:08:52.644 06:07:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:08:52.644 [2024-12-09 06:07:47.077496] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:08:52.644 [2024-12-09 06:07:47.077690] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:08:52.904 [2024-12-09 06:07:47.231183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:08:52.904 [2024-12-09 06:07:47.272555] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:08:52.904 [2024-12-09 06:07:47.272601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:08:52.904 [2024-12-09 06:07:47.272611] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:08:52.904 [2024-12-09 06:07:47.272619] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:08:52.904 [2024-12-09 06:07:47.272626] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
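The run of `Cannot find device ...` and `Cannot open network namespace ...` messages above is expected: nvmf_veth_init first tries to tear down whatever a previous run may have left behind (each failing command is followed by `true`), then rebuilds the topology from scratch. Reduced to its essentials, that bring-up looks roughly like the sketch below; the names and addresses are the ones used above, while the second initiator/target interface pair, error handling, and the iptables comment tags are omitted.

  # Sketch of the veth/netns topology nvmf_veth_init builds for these tests.
  ip netns add nvmf_tgt_ns_spdk

  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the host-side peers so initiator and target addresses can reach each other.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br

  # Open the NVMe/TCP port and allow forwarding across the bridge.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  ping -c 1 10.0.0.3   # sanity check: the target-side address answers

Once the pings succeed, identify.sh launches the target inside the namespace (`ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF`) and waits for its RPC socket, which is what produces the DPDK/EAL and reactor start-up notices above.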
01:08:52.904 [2024-12-09 06:07:47.273543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:08:52.904 [2024-12-09 06:07:47.273714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:08:52.904 [2024-12-09 06:07:47.274690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:08:52.904 [2024-12-09 06:07:47.274690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:08:52.904 [2024-12-09 06:07:47.317080] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:08:53.472 06:07:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:08:53.472 06:07:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 01:08:53.472 06:07:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:08:53.472 06:07:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:53.472 06:07:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:08:53.472 [2024-12-09 06:07:47.927444] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:08:53.472 06:07:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:53.472 06:07:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 01:08:53.472 06:07:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 01:08:53.472 06:07:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:08:53.472 06:07:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:08:53.472 06:07:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:53.472 06:07:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:08:53.472 Malloc0 01:08:53.472 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:53.472 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:08:53.472 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:53.472 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:08:53.472 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:53.472 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 01:08:53.472 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:53.472 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:08:53.472 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:53.472 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:08:53.472 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:53.472 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:08:53.472 [2024-12-09 06:07:48.049887] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:08:53.472 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:53.472 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:08:53.472 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:53.472 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:08:53.734 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:53.734 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 01:08:53.735 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:53.735 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:08:53.735 [ 01:08:53.735 { 01:08:53.735 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 01:08:53.735 "subtype": "Discovery", 01:08:53.735 "listen_addresses": [ 01:08:53.735 { 01:08:53.735 "trtype": "TCP", 01:08:53.735 "adrfam": "IPv4", 01:08:53.735 "traddr": "10.0.0.3", 01:08:53.735 "trsvcid": "4420" 01:08:53.735 } 01:08:53.735 ], 01:08:53.735 "allow_any_host": true, 01:08:53.735 "hosts": [] 01:08:53.735 }, 01:08:53.735 { 01:08:53.735 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:08:53.735 "subtype": "NVMe", 01:08:53.735 "listen_addresses": [ 01:08:53.735 { 01:08:53.735 "trtype": "TCP", 01:08:53.735 "adrfam": "IPv4", 01:08:53.735 "traddr": "10.0.0.3", 01:08:53.735 "trsvcid": "4420" 01:08:53.735 } 01:08:53.735 ], 01:08:53.735 "allow_any_host": true, 01:08:53.735 "hosts": [], 01:08:53.735 "serial_number": "SPDK00000000000001", 01:08:53.735 "model_number": "SPDK bdev Controller", 01:08:53.735 "max_namespaces": 32, 01:08:53.735 "min_cntlid": 1, 01:08:53.735 "max_cntlid": 65519, 01:08:53.735 "namespaces": [ 01:08:53.735 { 01:08:53.735 "nsid": 1, 01:08:53.735 "bdev_name": "Malloc0", 01:08:53.735 "name": "Malloc0", 01:08:53.735 "nguid": "ABCDEF0123456789ABCDEF0123456789", 01:08:53.735 "eui64": "ABCDEF0123456789", 01:08:53.735 "uuid": "6f24753a-0c97-4ec4-b4b7-34ca58e2288f" 01:08:53.735 } 01:08:53.735 ] 01:08:53.735 } 01:08:53.735 ] 01:08:53.735 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:53.735 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 01:08:53.735 [2024-12-09 06:07:48.127602] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
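The JSON document above is the output of `nvmf_get_subsystems` and reflects the handful of RPCs identify.sh issued just before it. The harness's rpc_cmd forwards its arguments to scripts/rpc.py, so issued by hand the same configuration would look roughly like the sketch below; the flag values are the ones visible in the trace, and the JSON-RPC socket is assumed to be at its default location.

  # Rough by-hand equivalent of the rpc_cmd calls above (default /var/tmp/spdk.sock).
  RPC=scripts/rpc.py

  $RPC nvmf_create_transport -t tcp -o -u 8192           # TCP transport with the options used above
  $RPC bdev_malloc_create 64 512 -b Malloc0              # 64 MB malloc bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
       --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  $RPC nvmf_get_subsystems                               # produces the JSON dump shown above

With the target configured, spdk_nvme_identify is pointed at the discovery subsystem on 10.0.0.3:4420, which is where the initialization output and nvme_tcp DEBUG trace below pick up.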
01:08:53.735 [2024-12-09 06:07:48.127652] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73694 ] 01:08:53.735 [2024-12-09 06:07:48.273007] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 01:08:53.735 [2024-12-09 06:07:48.273055] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 01:08:53.735 [2024-12-09 06:07:48.273061] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 01:08:53.735 [2024-12-09 06:07:48.273074] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 01:08:53.735 [2024-12-09 06:07:48.273085] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 01:08:53.735 [2024-12-09 06:07:48.273409] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 01:08:53.735 [2024-12-09 06:07:48.273452] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x12ad750 0 01:08:53.735 [2024-12-09 06:07:48.278124] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 01:08:53.735 [2024-12-09 06:07:48.278146] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 01:08:53.735 [2024-12-09 06:07:48.278152] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 01:08:53.735 [2024-12-09 06:07:48.278156] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 01:08:53.735 [2024-12-09 06:07:48.278187] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.735 [2024-12-09 06:07:48.278193] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.735 [2024-12-09 06:07:48.278198] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ad750) 01:08:53.735 [2024-12-09 06:07:48.278208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 01:08:53.735 [2024-12-09 06:07:48.278236] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311740, cid 0, qid 0 01:08:53.735 [2024-12-09 06:07:48.286121] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.735 [2024-12-09 06:07:48.286137] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.735 [2024-12-09 06:07:48.286142] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.735 [2024-12-09 06:07:48.286147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311740) on tqpair=0x12ad750 01:08:53.735 [2024-12-09 06:07:48.286158] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 01:08:53.735 [2024-12-09 06:07:48.286165] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 01:08:53.735 [2024-12-09 06:07:48.286171] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 01:08:53.735 [2024-12-09 06:07:48.286186] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.735 [2024-12-09 06:07:48.286191] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
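The DEBUG lines that follow show spdk_nvme_identify performing the standard fabric bring-up against the discovery controller: connect the admin queue (icreq plus FABRIC CONNECT), read VS and CAP, check CC.EN, enable the controller with CC.EN = 1, wait for CSTS.RDY = 1, and finally issue IDENTIFY. For comparison only, the same endpoint could be queried with the kernel initiator; the two commands below are a hypothetical nvme-cli equivalent and are not run by this test.

  nvme discover -t tcp -a 10.0.0.3 -s 4420                                # list what the discovery subsystem advertises
  nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1   # attach the advertised NVMe subsystem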
01:08:53.735 [2024-12-09 06:07:48.286195] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ad750) 01:08:53.735 [2024-12-09 06:07:48.286203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.735 [2024-12-09 06:07:48.286226] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311740, cid 0, qid 0 01:08:53.735 [2024-12-09 06:07:48.286274] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.735 [2024-12-09 06:07:48.286280] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.735 [2024-12-09 06:07:48.286284] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.735 [2024-12-09 06:07:48.286288] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311740) on tqpair=0x12ad750 01:08:53.735 [2024-12-09 06:07:48.286293] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 01:08:53.735 [2024-12-09 06:07:48.286301] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 01:08:53.735 [2024-12-09 06:07:48.286307] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.735 [2024-12-09 06:07:48.286311] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.735 [2024-12-09 06:07:48.286315] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ad750) 01:08:53.735 [2024-12-09 06:07:48.286322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.735 [2024-12-09 06:07:48.286337] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311740, cid 0, qid 0 01:08:53.735 [2024-12-09 06:07:48.286378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.735 [2024-12-09 06:07:48.286383] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.735 [2024-12-09 06:07:48.286387] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.735 [2024-12-09 06:07:48.286391] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311740) on tqpair=0x12ad750 01:08:53.735 [2024-12-09 06:07:48.286396] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 01:08:53.735 [2024-12-09 06:07:48.286404] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 01:08:53.735 [2024-12-09 06:07:48.286410] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.735 [2024-12-09 06:07:48.286414] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.735 [2024-12-09 06:07:48.286418] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ad750) 01:08:53.735 [2024-12-09 06:07:48.286424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.735 [2024-12-09 06:07:48.286439] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311740, cid 0, qid 0 01:08:53.735 [2024-12-09 06:07:48.286484] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.735 [2024-12-09 06:07:48.286490] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.735 [2024-12-09 06:07:48.286493] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.735 [2024-12-09 06:07:48.286497] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311740) on tqpair=0x12ad750 01:08:53.735 [2024-12-09 06:07:48.286503] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 01:08:53.735 [2024-12-09 06:07:48.286511] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.735 [2024-12-09 06:07:48.286515] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.735 [2024-12-09 06:07:48.286519] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ad750) 01:08:53.735 [2024-12-09 06:07:48.286525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.735 [2024-12-09 06:07:48.286539] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311740, cid 0, qid 0 01:08:53.735 [2024-12-09 06:07:48.286586] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.735 [2024-12-09 06:07:48.286592] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.735 [2024-12-09 06:07:48.286596] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.735 [2024-12-09 06:07:48.286600] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311740) on tqpair=0x12ad750 01:08:53.735 [2024-12-09 06:07:48.286605] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 01:08:53.735 [2024-12-09 06:07:48.286610] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 01:08:53.735 [2024-12-09 06:07:48.286617] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 01:08:53.735 [2024-12-09 06:07:48.286726] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 01:08:53.735 [2024-12-09 06:07:48.286732] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 01:08:53.735 [2024-12-09 06:07:48.286740] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.735 [2024-12-09 06:07:48.286744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.735 [2024-12-09 06:07:48.286748] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ad750) 01:08:53.735 [2024-12-09 06:07:48.286754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.736 [2024-12-09 06:07:48.286769] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311740, cid 0, qid 0 01:08:53.736 [2024-12-09 06:07:48.286806] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.736 [2024-12-09 06:07:48.286812] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.736 [2024-12-09 06:07:48.286815] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 01:08:53.736 [2024-12-09 06:07:48.286819] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311740) on tqpair=0x12ad750 01:08:53.736 [2024-12-09 06:07:48.286824] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 01:08:53.736 [2024-12-09 06:07:48.286832] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.736 [2024-12-09 06:07:48.286836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.736 [2024-12-09 06:07:48.286840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ad750) 01:08:53.736 [2024-12-09 06:07:48.286846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.736 [2024-12-09 06:07:48.286861] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311740, cid 0, qid 0 01:08:53.736 [2024-12-09 06:07:48.286894] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.736 [2024-12-09 06:07:48.286899] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.736 [2024-12-09 06:07:48.286903] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.736 [2024-12-09 06:07:48.286907] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311740) on tqpair=0x12ad750 01:08:53.736 [2024-12-09 06:07:48.286912] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 01:08:53.736 [2024-12-09 06:07:48.286917] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 01:08:53.736 [2024-12-09 06:07:48.286924] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 01:08:53.736 [2024-12-09 06:07:48.286933] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 01:08:53.736 [2024-12-09 06:07:48.286942] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.736 [2024-12-09 06:07:48.286946] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ad750) 01:08:53.736 [2024-12-09 06:07:48.286952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.736 [2024-12-09 06:07:48.286967] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311740, cid 0, qid 0 01:08:53.736 [2024-12-09 06:07:48.287051] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:08:53.736 [2024-12-09 06:07:48.287057] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:08:53.736 [2024-12-09 06:07:48.287061] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:08:53.736 [2024-12-09 06:07:48.287065] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12ad750): datao=0, datal=4096, cccid=0 01:08:53.736 [2024-12-09 06:07:48.287070] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1311740) on tqpair(0x12ad750): expected_datao=0, payload_size=4096 01:08:53.736 [2024-12-09 06:07:48.287075] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 01:08:53.736 [2024-12-09 06:07:48.287082] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:08:53.736 [2024-12-09 06:07:48.287104] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:08:53.736 [2024-12-09 06:07:48.287115] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.736 [2024-12-09 06:07:48.287120] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.736 [2024-12-09 06:07:48.287124] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.736 [2024-12-09 06:07:48.287128] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311740) on tqpair=0x12ad750 01:08:53.736 [2024-12-09 06:07:48.287136] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 01:08:53.736 [2024-12-09 06:07:48.287141] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 01:08:53.736 [2024-12-09 06:07:48.287146] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 01:08:53.736 [2024-12-09 06:07:48.287152] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 01:08:53.736 [2024-12-09 06:07:48.287157] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 01:08:53.736 [2024-12-09 06:07:48.287162] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 01:08:53.736 [2024-12-09 06:07:48.287170] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 01:08:53.736 [2024-12-09 06:07:48.287177] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.736 [2024-12-09 06:07:48.287181] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.736 [2024-12-09 06:07:48.287185] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ad750) 01:08:53.736 [2024-12-09 06:07:48.287191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 01:08:53.736 [2024-12-09 06:07:48.287209] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311740, cid 0, qid 0 01:08:53.736 [2024-12-09 06:07:48.287254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.736 [2024-12-09 06:07:48.287260] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.736 [2024-12-09 06:07:48.287264] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.736 [2024-12-09 06:07:48.287268] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311740) on tqpair=0x12ad750 01:08:53.736 [2024-12-09 06:07:48.287281] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.736 [2024-12-09 06:07:48.287285] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.736 [2024-12-09 06:07:48.287289] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ad750) 01:08:53.736 [2024-12-09 06:07:48.287294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:08:53.736 
[2024-12-09 06:07:48.287301] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.736 [2024-12-09 06:07:48.287304] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.736 [2024-12-09 06:07:48.287308] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x12ad750) 01:08:53.736 [2024-12-09 06:07:48.287314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:08:53.736 [2024-12-09 06:07:48.287320] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.736 [2024-12-09 06:07:48.287324] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.736 [2024-12-09 06:07:48.287327] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x12ad750) 01:08:53.736 [2024-12-09 06:07:48.287333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:08:53.736 [2024-12-09 06:07:48.287339] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.736 [2024-12-09 06:07:48.287343] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.736 [2024-12-09 06:07:48.287346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ad750) 01:08:53.736 [2024-12-09 06:07:48.287352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:08:53.736 [2024-12-09 06:07:48.287357] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 01:08:53.736 [2024-12-09 06:07:48.287365] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 01:08:53.736 [2024-12-09 06:07:48.287371] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.736 [2024-12-09 06:07:48.287375] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12ad750) 01:08:53.736 [2024-12-09 06:07:48.287381] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.736 [2024-12-09 06:07:48.287398] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311740, cid 0, qid 0 01:08:53.736 [2024-12-09 06:07:48.287403] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13118c0, cid 1, qid 0 01:08:53.736 [2024-12-09 06:07:48.287408] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311a40, cid 2, qid 0 01:08:53.736 [2024-12-09 06:07:48.287413] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311bc0, cid 3, qid 0 01:08:53.736 [2024-12-09 06:07:48.287417] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311d40, cid 4, qid 0 01:08:53.736 [2024-12-09 06:07:48.287482] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.736 [2024-12-09 06:07:48.287488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.736 [2024-12-09 06:07:48.287492] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.736 [2024-12-09 06:07:48.287496] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311d40) on tqpair=0x12ad750 01:08:53.736 [2024-12-09 
06:07:48.287501] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 01:08:53.736 [2024-12-09 06:07:48.287510] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 01:08:53.736 [2024-12-09 06:07:48.287519] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.736 [2024-12-09 06:07:48.287524] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12ad750) 01:08:53.736 [2024-12-09 06:07:48.287530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.736 [2024-12-09 06:07:48.287545] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311d40, cid 4, qid 0 01:08:53.736 [2024-12-09 06:07:48.287592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:08:53.736 [2024-12-09 06:07:48.287598] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:08:53.736 [2024-12-09 06:07:48.287602] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:08:53.736 [2024-12-09 06:07:48.287606] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12ad750): datao=0, datal=4096, cccid=4 01:08:53.736 [2024-12-09 06:07:48.287610] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1311d40) on tqpair(0x12ad750): expected_datao=0, payload_size=4096 01:08:53.736 [2024-12-09 06:07:48.287615] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.736 [2024-12-09 06:07:48.287621] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:08:53.736 [2024-12-09 06:07:48.287625] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:08:53.736 [2024-12-09 06:07:48.287634] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.736 [2024-12-09 06:07:48.287639] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.736 [2024-12-09 06:07:48.287643] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.737 [2024-12-09 06:07:48.287647] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311d40) on tqpair=0x12ad750 01:08:53.737 [2024-12-09 06:07:48.287659] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 01:08:53.737 [2024-12-09 06:07:48.287680] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.737 [2024-12-09 06:07:48.287685] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12ad750) 01:08:53.737 [2024-12-09 06:07:48.287691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.737 [2024-12-09 06:07:48.287697] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.737 [2024-12-09 06:07:48.287702] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.737 [2024-12-09 06:07:48.287705] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12ad750) 01:08:53.737 [2024-12-09 06:07:48.287711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 01:08:53.737 [2024-12-09 06:07:48.287731] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311d40, cid 4, qid 0 01:08:53.737 [2024-12-09 06:07:48.287737] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311ec0, cid 5, qid 0 01:08:53.737 [2024-12-09 06:07:48.287817] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:08:53.737 [2024-12-09 06:07:48.287822] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:08:53.737 [2024-12-09 06:07:48.287826] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:08:53.737 [2024-12-09 06:07:48.287830] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12ad750): datao=0, datal=1024, cccid=4 01:08:53.737 [2024-12-09 06:07:48.287835] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1311d40) on tqpair(0x12ad750): expected_datao=0, payload_size=1024 01:08:53.737 [2024-12-09 06:07:48.287839] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.737 [2024-12-09 06:07:48.287845] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:08:53.737 [2024-12-09 06:07:48.287849] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:08:53.737 [2024-12-09 06:07:48.287854] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.737 [2024-12-09 06:07:48.287860] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.737 [2024-12-09 06:07:48.287864] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.737 [2024-12-09 06:07:48.287867] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311ec0) on tqpair=0x12ad750 01:08:53.737 [2024-12-09 06:07:48.287884] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.737 [2024-12-09 06:07:48.287890] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.737 [2024-12-09 06:07:48.287894] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.737 [2024-12-09 06:07:48.287898] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311d40) on tqpair=0x12ad750 01:08:53.737 [2024-12-09 06:07:48.287915] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.737 [2024-12-09 06:07:48.287920] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12ad750) 01:08:53.737 [2024-12-09 06:07:48.287926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.737 [2024-12-09 06:07:48.287944] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311d40, cid 4, qid 0 01:08:53.737 [2024-12-09 06:07:48.288009] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:08:53.737 [2024-12-09 06:07:48.288015] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:08:53.737 [2024-12-09 06:07:48.288019] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:08:53.737 [2024-12-09 06:07:48.288022] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12ad750): datao=0, datal=3072, cccid=4 01:08:53.737 [2024-12-09 06:07:48.288027] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1311d40) on tqpair(0x12ad750): expected_datao=0, payload_size=3072 01:08:53.737 [2024-12-09 06:07:48.288032] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.737 [2024-12-09 06:07:48.288038] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
01:08:53.737 [2024-12-09 06:07:48.288042] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:08:53.737 [2024-12-09 06:07:48.288051] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.737 [2024-12-09 06:07:48.288057] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.737 [2024-12-09 06:07:48.288060] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.737 [2024-12-09 06:07:48.288064] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311d40) on tqpair=0x12ad750 01:08:53.737 [2024-12-09 06:07:48.288072] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.737 [2024-12-09 06:07:48.288076] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12ad750) 01:08:53.737 [2024-12-09 06:07:48.288082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.737 [2024-12-09 06:07:48.288112] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311d40, cid 4, qid 0 01:08:53.737 [2024-12-09 06:07:48.288169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:08:53.737 [2024-12-09 06:07:48.288175] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:08:53.737 [2024-12-09 06:07:48.288178] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:08:53.737 [2024-12-09 06:07:48.288182] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12ad750): datao=0, datal=8, cccid=4 01:08:53.737 [2024-12-09 06:07:48.288187] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1311d40) on tqpair(0x12ad750): expected_datao=0, payload_size=8 01:08:53.737 [2024-12-09 06:07:48.288192] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.737 [2024-12-09 06:07:48.288198] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:08:53.737 [2024-12-09 06:07:48.288201] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:08:53.737 ===================================================== 01:08:53.737 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 01:08:53.737 ===================================================== 01:08:53.737 Controller Capabilities/Features 01:08:53.737 ================================ 01:08:53.737 Vendor ID: 0000 01:08:53.737 Subsystem Vendor ID: 0000 01:08:53.737 Serial Number: .................... 01:08:53.737 Model Number: ........................................ 
01:08:53.737 Firmware Version: 25.01 01:08:53.737 Recommended Arb Burst: 0 01:08:53.737 IEEE OUI Identifier: 00 00 00 01:08:53.737 Multi-path I/O 01:08:53.737 May have multiple subsystem ports: No 01:08:53.737 May have multiple controllers: No 01:08:53.737 Associated with SR-IOV VF: No 01:08:53.737 Max Data Transfer Size: 131072 01:08:53.737 Max Number of Namespaces: 0 01:08:53.737 Max Number of I/O Queues: 1024 01:08:53.737 NVMe Specification Version (VS): 1.3 01:08:53.737 NVMe Specification Version (Identify): 1.3 01:08:53.737 Maximum Queue Entries: 128 01:08:53.737 Contiguous Queues Required: Yes 01:08:53.737 Arbitration Mechanisms Supported 01:08:53.737 Weighted Round Robin: Not Supported 01:08:53.737 Vendor Specific: Not Supported 01:08:53.737 Reset Timeout: 15000 ms 01:08:53.737 Doorbell Stride: 4 bytes 01:08:53.737 NVM Subsystem Reset: Not Supported 01:08:53.737 Command Sets Supported 01:08:53.737 NVM Command Set: Supported 01:08:53.737 Boot Partition: Not Supported 01:08:53.737 Memory Page Size Minimum: 4096 bytes 01:08:53.737 Memory Page Size Maximum: 4096 bytes 01:08:53.737 Persistent Memory Region: Not Supported 01:08:53.737 Optional Asynchronous Events Supported 01:08:53.737 Namespace Attribute Notices: Not Supported 01:08:53.737 Firmware Activation Notices: Not Supported 01:08:53.737 ANA Change Notices: Not Supported 01:08:53.737 PLE Aggregate Log Change Notices: Not Supported 01:08:53.737 LBA Status Info Alert Notices: Not Supported 01:08:53.737 EGE Aggregate Log Change Notices: Not Supported 01:08:53.737 Normal NVM Subsystem Shutdown event: Not Supported 01:08:53.737 Zone Descriptor Change Notices: Not Supported 01:08:53.737 Discovery Log Change Notices: Supported 01:08:53.737 Controller Attributes 01:08:53.737 128-bit Host Identifier: Not Supported 01:08:53.737 Non-Operational Permissive Mode: Not Supported 01:08:53.737 NVM Sets: Not Supported 01:08:53.737 Read Recovery Levels: Not Supported 01:08:53.737 Endurance Groups: Not Supported 01:08:53.737 Predictable Latency Mode: Not Supported 01:08:53.737 Traffic Based Keep ALive: Not Supported 01:08:53.737 Namespace Granularity: Not Supported 01:08:53.737 SQ Associations: Not Supported 01:08:53.737 UUID List: Not Supported 01:08:53.737 Multi-Domain Subsystem: Not Supported 01:08:53.737 Fixed Capacity Management: Not Supported 01:08:53.737 Variable Capacity Management: Not Supported 01:08:53.737 Delete Endurance Group: Not Supported 01:08:53.737 Delete NVM Set: Not Supported 01:08:53.737 Extended LBA Formats Supported: Not Supported 01:08:53.737 Flexible Data Placement Supported: Not Supported 01:08:53.737 01:08:53.737 Controller Memory Buffer Support 01:08:53.737 ================================ 01:08:53.737 Supported: No 01:08:53.737 01:08:53.737 Persistent Memory Region Support 01:08:53.737 ================================ 01:08:53.737 Supported: No 01:08:53.737 01:08:53.737 Admin Command Set Attributes 01:08:53.737 ============================ 01:08:53.737 Security Send/Receive: Not Supported 01:08:53.737 Format NVM: Not Supported 01:08:53.737 Firmware Activate/Download: Not Supported 01:08:53.737 Namespace Management: Not Supported 01:08:53.737 Device Self-Test: Not Supported 01:08:53.737 Directives: Not Supported 01:08:53.737 NVMe-MI: Not Supported 01:08:53.737 Virtualization Management: Not Supported 01:08:53.737 Doorbell Buffer Config: Not Supported 01:08:53.737 Get LBA Status Capability: Not Supported 01:08:53.737 Command & Feature Lockdown Capability: Not Supported 01:08:53.737 Abort Command Limit: 1 01:08:53.737 Async 
Event Request Limit: 4 01:08:53.738 Number of Firmware Slots: N/A 01:08:53.738 Firmware Slot 1 Read-Only: N/A 01:08:53.738 Firmware Activation Without Reset: N/A 01:08:53.738 Multiple Update Detection Support: N/A 01:08:53.738 Firmware Update Granularity: No Information Provided 01:08:53.738 Per-Namespace SMART Log: No 01:08:53.738 Asymmetric Namespace Access Log Page: Not Supported 01:08:53.738 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 01:08:53.738 Command Effects Log Page: Not Supported 01:08:53.738 Get Log Page Extended Data: Supported 01:08:53.738 Telemetry Log Pages: Not Supported 01:08:53.738 Persistent Event Log Pages: Not Supported 01:08:53.738 Supported Log Pages Log Page: May Support 01:08:53.738 Commands Supported & Effects Log Page: Not Supported 01:08:53.738 Feature Identifiers & Effects Log Page:May Support 01:08:53.738 NVMe-MI Commands & Effects Log Page: May Support 01:08:53.738 Data Area 4 for Telemetry Log: Not Supported 01:08:53.738 Error Log Page Entries Supported: 128 01:08:53.738 Keep Alive: Not Supported 01:08:53.738 01:08:53.738 NVM Command Set Attributes 01:08:53.738 ========================== 01:08:53.738 Submission Queue Entry Size 01:08:53.738 Max: 1 01:08:53.738 Min: 1 01:08:53.738 Completion Queue Entry Size 01:08:53.738 Max: 1 01:08:53.738 Min: 1 01:08:53.738 Number of Namespaces: 0 01:08:53.738 Compare Command: Not Supported 01:08:53.738 Write Uncorrectable Command: Not Supported 01:08:53.738 Dataset Management Command: Not Supported 01:08:53.738 Write Zeroes Command: Not Supported 01:08:53.738 Set Features Save Field: Not Supported 01:08:53.738 Reservations: Not Supported 01:08:53.738 Timestamp: Not Supported 01:08:53.738 Copy: Not Supported 01:08:53.738 Volatile Write Cache: Not Present 01:08:53.738 Atomic Write Unit (Normal): 1 01:08:53.738 Atomic Write Unit (PFail): 1 01:08:53.738 Atomic Compare & Write Unit: 1 01:08:53.738 Fused Compare & Write: Supported 01:08:53.738 Scatter-Gather List 01:08:53.738 SGL Command Set: Supported 01:08:53.738 SGL Keyed: Supported 01:08:53.738 SGL Bit Bucket Descriptor: Not Supported 01:08:53.738 SGL Metadata Pointer: Not Supported 01:08:53.738 Oversized SGL: Not Supported 01:08:53.738 SGL Metadata Address: Not Supported 01:08:53.738 SGL Offset: Supported 01:08:53.738 Transport SGL Data Block: Not Supported 01:08:53.738 Replay Protected Memory Block: Not Supported 01:08:53.738 01:08:53.738 Firmware Slot Information 01:08:53.738 ========================= 01:08:53.738 Active slot: 0 01:08:53.738 01:08:53.738 01:08:53.738 Error Log 01:08:53.738 ========= 01:08:53.738 01:08:53.738 Active Namespaces 01:08:53.738 ================= 01:08:53.738 Discovery Log Page 01:08:53.738 ================== 01:08:53.738 Generation Counter: 2 01:08:53.738 Number of Records: 2 01:08:53.738 Record Format: 0 01:08:53.738 01:08:53.738 Discovery Log Entry 0 01:08:53.738 ---------------------- 01:08:53.738 Transport Type: 3 (TCP) 01:08:53.738 Address Family: 1 (IPv4) 01:08:53.738 Subsystem Type: 3 (Current Discovery Subsystem) 01:08:53.738 Entry Flags: 01:08:53.738 Duplicate Returned Information: 1 01:08:53.738 Explicit Persistent Connection Support for Discovery: 1 01:08:53.738 Transport Requirements: 01:08:53.738 Secure Channel: Not Required 01:08:53.738 Port ID: 0 (0x0000) 01:08:53.738 Controller ID: 65535 (0xffff) 01:08:53.738 Admin Max SQ Size: 128 01:08:53.738 Transport Service Identifier: 4420 01:08:53.738 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 01:08:53.738 Transport Address: 10.0.0.3 01:08:53.738 
Discovery Log Entry 1 01:08:53.738 ---------------------- 01:08:53.738 Transport Type: 3 (TCP) 01:08:53.738 Address Family: 1 (IPv4) 01:08:53.738 Subsystem Type: 2 (NVM Subsystem) 01:08:53.738 Entry Flags: 01:08:53.738 Duplicate Returned Information: 0 01:08:53.738 Explicit Persistent Connection Support for Discovery: 0 01:08:53.738 Transport Requirements: 01:08:53.738 Secure Channel: Not Required 01:08:53.738 Port ID: 0 (0x0000) 01:08:53.738 Controller ID: 65535 (0xffff) 01:08:53.738 Admin Max SQ Size: 128 01:08:53.738 Transport Service Identifier: 4420 01:08:53.738 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 01:08:53.738 Transport Address: 10.0.0.3 [2024-12-09 06:07:48.288217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.738 [2024-12-09 06:07:48.288223] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.738 [2024-12-09 06:07:48.288227] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.738 [2024-12-09 06:07:48.288231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311d40) on tqpair=0x12ad750 01:08:53.738 [2024-12-09 06:07:48.288313] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 01:08:53.738 [2024-12-09 06:07:48.288323] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311740) on tqpair=0x12ad750 01:08:53.738 [2024-12-09 06:07:48.288329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:53.738 [2024-12-09 06:07:48.288335] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13118c0) on tqpair=0x12ad750 01:08:53.738 [2024-12-09 06:07:48.288339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:53.738 [2024-12-09 06:07:48.288345] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311a40) on tqpair=0x12ad750 01:08:53.738 [2024-12-09 06:07:48.288349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:53.738 [2024-12-09 06:07:48.288354] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311bc0) on tqpair=0x12ad750 01:08:53.738 [2024-12-09 06:07:48.288359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:53.738 [2024-12-09 06:07:48.288367] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.738 [2024-12-09 06:07:48.288371] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.738 [2024-12-09 06:07:48.288375] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ad750) 01:08:53.738 [2024-12-09 06:07:48.288381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.738 [2024-12-09 06:07:48.288397] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311bc0, cid 3, qid 0 01:08:53.738 [2024-12-09 06:07:48.288430] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.738 [2024-12-09 06:07:48.288436] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.738 [2024-12-09 06:07:48.288440] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.738 [2024-12-09 06:07:48.288444] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311bc0) on tqpair=0x12ad750 01:08:53.738 [2024-12-09 06:07:48.288450] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.738 [2024-12-09 06:07:48.288454] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.738 [2024-12-09 06:07:48.288458] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ad750) 01:08:53.738 [2024-12-09 06:07:48.288464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.738 [2024-12-09 06:07:48.288482] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311bc0, cid 3, qid 0 01:08:53.738 [2024-12-09 06:07:48.288528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.738 [2024-12-09 06:07:48.288534] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.738 [2024-12-09 06:07:48.288537] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.738 [2024-12-09 06:07:48.288541] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311bc0) on tqpair=0x12ad750 01:08:53.738 [2024-12-09 06:07:48.288550] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 01:08:53.738 [2024-12-09 06:07:48.288555] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 01:08:53.738 [2024-12-09 06:07:48.288564] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.738 [2024-12-09 06:07:48.288568] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.738 [2024-12-09 06:07:48.288572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ad750) 01:08:53.738 [2024-12-09 06:07:48.288578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.738 [2024-12-09 06:07:48.288593] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311bc0, cid 3, qid 0 01:08:53.738 [2024-12-09 06:07:48.288637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.738 [2024-12-09 06:07:48.288643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.738 [2024-12-09 06:07:48.288647] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.738 [2024-12-09 06:07:48.288651] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311bc0) on tqpair=0x12ad750 01:08:53.738 [2024-12-09 06:07:48.288659] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.738 [2024-12-09 06:07:48.288664] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.738 [2024-12-09 06:07:48.288667] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ad750) 01:08:53.738 [2024-12-09 06:07:48.288673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.738 [2024-12-09 06:07:48.288688] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311bc0, cid 3, qid 0 01:08:53.738 [2024-12-09 06:07:48.288729] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.738 [2024-12-09 06:07:48.288735] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.738 [2024-12-09 
06:07:48.288738] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.738 [2024-12-09 06:07:48.288742] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311bc0) on tqpair=0x12ad750 01:08:53.739 [2024-12-09 06:07:48.288751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.288755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.288759] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ad750) 01:08:53.739 [2024-12-09 06:07:48.288765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.739 [2024-12-09 06:07:48.288779] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311bc0, cid 3, qid 0 01:08:53.739 [2024-12-09 06:07:48.288817] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.739 [2024-12-09 06:07:48.288823] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.739 [2024-12-09 06:07:48.288826] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.288830] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311bc0) on tqpair=0x12ad750 01:08:53.739 [2024-12-09 06:07:48.288839] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.288843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.288847] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ad750) 01:08:53.739 [2024-12-09 06:07:48.288853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.739 [2024-12-09 06:07:48.288867] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311bc0, cid 3, qid 0 01:08:53.739 [2024-12-09 06:07:48.288901] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.739 [2024-12-09 06:07:48.288907] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.739 [2024-12-09 06:07:48.288911] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.288914] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311bc0) on tqpair=0x12ad750 01:08:53.739 [2024-12-09 06:07:48.288923] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.288927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.288931] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ad750) 01:08:53.739 [2024-12-09 06:07:48.288937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.739 [2024-12-09 06:07:48.288951] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311bc0, cid 3, qid 0 01:08:53.739 [2024-12-09 06:07:48.288990] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.739 [2024-12-09 06:07:48.288995] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.739 [2024-12-09 06:07:48.288999] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.289003] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311bc0) on 
tqpair=0x12ad750 01:08:53.739 [2024-12-09 06:07:48.289011] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.289016] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.289019] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ad750) 01:08:53.739 [2024-12-09 06:07:48.289026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.739 [2024-12-09 06:07:48.289040] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311bc0, cid 3, qid 0 01:08:53.739 [2024-12-09 06:07:48.289077] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.739 [2024-12-09 06:07:48.289082] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.739 [2024-12-09 06:07:48.289095] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.289100] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311bc0) on tqpair=0x12ad750 01:08:53.739 [2024-12-09 06:07:48.289108] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.289112] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.289116] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ad750) 01:08:53.739 [2024-12-09 06:07:48.289122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.739 [2024-12-09 06:07:48.289137] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311bc0, cid 3, qid 0 01:08:53.739 [2024-12-09 06:07:48.289174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.739 [2024-12-09 06:07:48.289180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.739 [2024-12-09 06:07:48.289184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.289187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311bc0) on tqpair=0x12ad750 01:08:53.739 [2024-12-09 06:07:48.289196] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.289200] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.289204] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ad750) 01:08:53.739 [2024-12-09 06:07:48.289210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.739 [2024-12-09 06:07:48.289225] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311bc0, cid 3, qid 0 01:08:53.739 [2024-12-09 06:07:48.289259] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.739 [2024-12-09 06:07:48.289264] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.739 [2024-12-09 06:07:48.289268] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.289272] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311bc0) on tqpair=0x12ad750 01:08:53.739 [2024-12-09 06:07:48.289280] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.289284] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.289288] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ad750) 01:08:53.739 [2024-12-09 06:07:48.289294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.739 [2024-12-09 06:07:48.289308] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311bc0, cid 3, qid 0 01:08:53.739 [2024-12-09 06:07:48.289346] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.739 [2024-12-09 06:07:48.289351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.739 [2024-12-09 06:07:48.289355] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.289359] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311bc0) on tqpair=0x12ad750 01:08:53.739 [2024-12-09 06:07:48.289367] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.289372] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.289375] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ad750) 01:08:53.739 [2024-12-09 06:07:48.289381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.739 [2024-12-09 06:07:48.289406] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311bc0, cid 3, qid 0 01:08:53.739 [2024-12-09 06:07:48.289448] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.739 [2024-12-09 06:07:48.289454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.739 [2024-12-09 06:07:48.289458] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.289462] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311bc0) on tqpair=0x12ad750 01:08:53.739 [2024-12-09 06:07:48.289470] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.289474] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.289478] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ad750) 01:08:53.739 [2024-12-09 06:07:48.289484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.739 [2024-12-09 06:07:48.289499] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311bc0, cid 3, qid 0 01:08:53.739 [2024-12-09 06:07:48.289535] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.739 [2024-12-09 06:07:48.289540] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.739 [2024-12-09 06:07:48.289544] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.289548] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311bc0) on tqpair=0x12ad750 01:08:53.739 [2024-12-09 06:07:48.289556] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.289560] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.289564] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ad750) 01:08:53.739 
[2024-12-09 06:07:48.289570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.739 [2024-12-09 06:07:48.289585] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311bc0, cid 3, qid 0 01:08:53.739 [2024-12-09 06:07:48.289623] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.739 [2024-12-09 06:07:48.289629] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.739 [2024-12-09 06:07:48.289633] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.739 [2024-12-09 06:07:48.289636] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311bc0) on tqpair=0x12ad750 01:08:53.740 [2024-12-09 06:07:48.289645] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.740 [2024-12-09 06:07:48.289649] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.740 [2024-12-09 06:07:48.289653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ad750) 01:08:53.740 [2024-12-09 06:07:48.289659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.740 [2024-12-09 06:07:48.289673] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311bc0, cid 3, qid 0 01:08:53.740 [2024-12-09 06:07:48.289710] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.740 [2024-12-09 06:07:48.289716] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.740 [2024-12-09 06:07:48.289719] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.740 [2024-12-09 06:07:48.289723] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311bc0) on tqpair=0x12ad750 01:08:53.740 [2024-12-09 06:07:48.289732] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.740 [2024-12-09 06:07:48.289736] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.740 [2024-12-09 06:07:48.289740] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ad750) 01:08:53.740 [2024-12-09 06:07:48.289746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.740 [2024-12-09 06:07:48.289760] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311bc0, cid 3, qid 0 01:08:53.740 [2024-12-09 06:07:48.289796] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.740 [2024-12-09 06:07:48.289802] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.740 [2024-12-09 06:07:48.289806] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.740 [2024-12-09 06:07:48.289810] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311bc0) on tqpair=0x12ad750 01:08:53.740 [2024-12-09 06:07:48.289818] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.740 [2024-12-09 06:07:48.289822] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.740 [2024-12-09 06:07:48.289826] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ad750) 01:08:53.740 [2024-12-09 06:07:48.289832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.740 [2024-12-09 06:07:48.289846] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311bc0, cid 3, qid 0 01:08:53.740 [2024-12-09 06:07:48.289883] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.740 [2024-12-09 06:07:48.289889] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.740 [2024-12-09 06:07:48.289892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.740 [2024-12-09 06:07:48.289896] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311bc0) on tqpair=0x12ad750 01:08:53.740 [2024-12-09 06:07:48.289905] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.740 [2024-12-09 06:07:48.289909] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.740 [2024-12-09 06:07:48.289913] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ad750) 01:08:53.740 [2024-12-09 06:07:48.289919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.740 [2024-12-09 06:07:48.289934] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311bc0, cid 3, qid 0 01:08:53.740 [2024-12-09 06:07:48.289978] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.740 [2024-12-09 06:07:48.289984] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.740 [2024-12-09 06:07:48.289987] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.740 [2024-12-09 06:07:48.289991] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311bc0) on tqpair=0x12ad750 01:08:53.740 [2024-12-09 06:07:48.289999] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.740 [2024-12-09 06:07:48.290004] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.740 [2024-12-09 06:07:48.290008] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ad750) 01:08:53.740 [2024-12-09 06:07:48.290014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.740 [2024-12-09 06:07:48.290028] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311bc0, cid 3, qid 0 01:08:53.740 [2024-12-09 06:07:48.290061] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.740 [2024-12-09 06:07:48.290067] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.740 [2024-12-09 06:07:48.290071] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.740 [2024-12-09 06:07:48.290075] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311bc0) on tqpair=0x12ad750 01:08:53.740 [2024-12-09 06:07:48.290083] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:53.740 [2024-12-09 06:07:48.294107] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:53.740 [2024-12-09 06:07:48.294114] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ad750) 01:08:53.740 [2024-12-09 06:07:48.294121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:53.740 [2024-12-09 06:07:48.294141] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1311bc0, cid 3, qid 0 01:08:53.740 [2024-12-09 06:07:48.294191] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:53.740 
[2024-12-09 06:07:48.294197] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:53.740 [2024-12-09 06:07:48.294201] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:53.740 [2024-12-09 06:07:48.294205] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1311bc0) on tqpair=0x12ad750 01:08:53.740 [2024-12-09 06:07:48.294212] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 01:08:53.740 01:08:53.740 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 01:08:54.002 [2024-12-09 06:07:48.339033] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:08:54.002 [2024-12-09 06:07:48.339078] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73697 ] 01:08:54.002 [2024-12-09 06:07:48.485731] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 01:08:54.002 [2024-12-09 06:07:48.485778] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 01:08:54.002 [2024-12-09 06:07:48.485783] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 01:08:54.002 [2024-12-09 06:07:48.485795] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 01:08:54.002 [2024-12-09 06:07:48.485804] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 01:08:54.002 [2024-12-09 06:07:48.486055] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 01:08:54.002 [2024-12-09 06:07:48.486119] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1954750 0 01:08:54.002 [2024-12-09 06:07:48.499108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 01:08:54.002 [2024-12-09 06:07:48.499129] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 01:08:54.002 [2024-12-09 06:07:48.499133] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 01:08:54.002 [2024-12-09 06:07:48.499137] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 01:08:54.002 [2024-12-09 06:07:48.499162] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.002 [2024-12-09 06:07:48.499168] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.002 [2024-12-09 06:07:48.499172] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1954750) 01:08:54.003 [2024-12-09 06:07:48.499181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 01:08:54.003 [2024-12-09 06:07:48.499207] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8740, cid 0, qid 0 01:08:54.003 [2024-12-09 06:07:48.507105] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.003 [2024-12-09 06:07:48.507121] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.003 [2024-12-09 06:07:48.507126] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.003 [2024-12-09 06:07:48.507130] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8740) on tqpair=0x1954750 01:08:54.003 [2024-12-09 06:07:48.507138] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 01:08:54.003 [2024-12-09 06:07:48.507160] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 01:08:54.003 [2024-12-09 06:07:48.507167] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 01:08:54.003 [2024-12-09 06:07:48.507181] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.003 [2024-12-09 06:07:48.507186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.003 [2024-12-09 06:07:48.507190] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1954750) 01:08:54.003 [2024-12-09 06:07:48.507197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.003 [2024-12-09 06:07:48.507219] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8740, cid 0, qid 0 01:08:54.003 [2024-12-09 06:07:48.507263] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.003 [2024-12-09 06:07:48.507269] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.003 [2024-12-09 06:07:48.507272] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.003 [2024-12-09 06:07:48.507277] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8740) on tqpair=0x1954750 01:08:54.003 [2024-12-09 06:07:48.507281] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 01:08:54.003 [2024-12-09 06:07:48.507289] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 01:08:54.003 [2024-12-09 06:07:48.507296] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.003 [2024-12-09 06:07:48.507300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.003 [2024-12-09 06:07:48.507303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1954750) 01:08:54.003 [2024-12-09 06:07:48.507309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.003 [2024-12-09 06:07:48.507325] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8740, cid 0, qid 0 01:08:54.003 [2024-12-09 06:07:48.507367] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.003 [2024-12-09 06:07:48.507373] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.003 [2024-12-09 06:07:48.507376] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.003 [2024-12-09 06:07:48.507380] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8740) on tqpair=0x1954750 01:08:54.003 [2024-12-09 06:07:48.507385] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 01:08:54.003 [2024-12-09 06:07:48.507393] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc 
(timeout 15000 ms) 01:08:54.003 [2024-12-09 06:07:48.507399] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.003 [2024-12-09 06:07:48.507403] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.003 [2024-12-09 06:07:48.507407] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1954750) 01:08:54.003 [2024-12-09 06:07:48.507413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.003 [2024-12-09 06:07:48.507428] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8740, cid 0, qid 0 01:08:54.003 [2024-12-09 06:07:48.507464] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.003 [2024-12-09 06:07:48.507470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.003 [2024-12-09 06:07:48.507473] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.003 [2024-12-09 06:07:48.507477] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8740) on tqpair=0x1954750 01:08:54.003 [2024-12-09 06:07:48.507483] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 01:08:54.003 [2024-12-09 06:07:48.507491] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.003 [2024-12-09 06:07:48.507495] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.003 [2024-12-09 06:07:48.507499] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1954750) 01:08:54.003 [2024-12-09 06:07:48.507505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.003 [2024-12-09 06:07:48.507520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8740, cid 0, qid 0 01:08:54.003 [2024-12-09 06:07:48.507559] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.003 [2024-12-09 06:07:48.507564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.003 [2024-12-09 06:07:48.507568] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.003 [2024-12-09 06:07:48.507572] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8740) on tqpair=0x1954750 01:08:54.003 [2024-12-09 06:07:48.507576] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 01:08:54.003 [2024-12-09 06:07:48.507582] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 01:08:54.003 [2024-12-09 06:07:48.507589] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 01:08:54.003 [2024-12-09 06:07:48.507698] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 01:08:54.003 [2024-12-09 06:07:48.507703] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 01:08:54.003 [2024-12-09 06:07:48.507710] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.003 [2024-12-09 06:07:48.507714] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
01:08:54.003 [2024-12-09 06:07:48.507718] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1954750) 01:08:54.003 [2024-12-09 06:07:48.507724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.003 [2024-12-09 06:07:48.507740] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8740, cid 0, qid 0 01:08:54.003 [2024-12-09 06:07:48.507782] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.003 [2024-12-09 06:07:48.507788] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.003 [2024-12-09 06:07:48.507792] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.003 [2024-12-09 06:07:48.507796] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8740) on tqpair=0x1954750 01:08:54.003 [2024-12-09 06:07:48.507800] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 01:08:54.003 [2024-12-09 06:07:48.507809] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.003 [2024-12-09 06:07:48.507813] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.003 [2024-12-09 06:07:48.507817] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1954750) 01:08:54.003 [2024-12-09 06:07:48.507823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.003 [2024-12-09 06:07:48.507838] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8740, cid 0, qid 0 01:08:54.003 [2024-12-09 06:07:48.507878] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.003 [2024-12-09 06:07:48.507884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.003 [2024-12-09 06:07:48.507887] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.003 [2024-12-09 06:07:48.507891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8740) on tqpair=0x1954750 01:08:54.003 [2024-12-09 06:07:48.507896] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 01:08:54.003 [2024-12-09 06:07:48.507901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 01:08:54.003 [2024-12-09 06:07:48.507908] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 01:08:54.003 [2024-12-09 06:07:48.507917] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 01:08:54.003 [2024-12-09 06:07:48.507925] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.003 [2024-12-09 06:07:48.507929] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1954750) 01:08:54.003 [2024-12-09 06:07:48.507935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.003 [2024-12-09 06:07:48.507950] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8740, cid 0, qid 0 01:08:54.003 [2024-12-09 
06:07:48.508025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:08:54.003 [2024-12-09 06:07:48.508031] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:08:54.003 [2024-12-09 06:07:48.508035] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:08:54.003 [2024-12-09 06:07:48.508039] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1954750): datao=0, datal=4096, cccid=0 01:08:54.003 [2024-12-09 06:07:48.508044] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b8740) on tqpair(0x1954750): expected_datao=0, payload_size=4096 01:08:54.003 [2024-12-09 06:07:48.508049] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.003 [2024-12-09 06:07:48.508056] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:08:54.003 [2024-12-09 06:07:48.508060] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:08:54.003 [2024-12-09 06:07:48.508068] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.003 [2024-12-09 06:07:48.508074] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.003 [2024-12-09 06:07:48.508078] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.003 [2024-12-09 06:07:48.508082] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8740) on tqpair=0x1954750 01:08:54.003 [2024-12-09 06:07:48.508089] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 01:08:54.003 [2024-12-09 06:07:48.508094] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 01:08:54.003 [2024-12-09 06:07:48.508099] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 01:08:54.003 [2024-12-09 06:07:48.508114] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 01:08:54.004 [2024-12-09 06:07:48.508119] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 01:08:54.004 [2024-12-09 06:07:48.508124] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 01:08:54.004 [2024-12-09 06:07:48.508132] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 01:08:54.004 [2024-12-09 06:07:48.508139] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.004 [2024-12-09 06:07:48.508143] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.004 [2024-12-09 06:07:48.508146] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1954750) 01:08:54.004 [2024-12-09 06:07:48.508153] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 01:08:54.004 [2024-12-09 06:07:48.508169] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8740, cid 0, qid 0 01:08:54.004 [2024-12-09 06:07:48.508213] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.004 [2024-12-09 06:07:48.508218] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.004 [2024-12-09 06:07:48.508222] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.004 [2024-12-09 
06:07:48.508226] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8740) on tqpair=0x1954750 01:08:54.004 [2024-12-09 06:07:48.508236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.004 [2024-12-09 06:07:48.508240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.004 [2024-12-09 06:07:48.508244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1954750) 01:08:54.004 [2024-12-09 06:07:48.508249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:08:54.004 [2024-12-09 06:07:48.508255] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.004 [2024-12-09 06:07:48.508259] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.004 [2024-12-09 06:07:48.508263] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1954750) 01:08:54.004 [2024-12-09 06:07:48.508268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:08:54.004 [2024-12-09 06:07:48.508274] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.004 [2024-12-09 06:07:48.508278] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.004 [2024-12-09 06:07:48.508282] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1954750) 01:08:54.004 [2024-12-09 06:07:48.508287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:08:54.004 [2024-12-09 06:07:48.508293] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.004 [2024-12-09 06:07:48.508297] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.004 [2024-12-09 06:07:48.508300] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1954750) 01:08:54.004 [2024-12-09 06:07:48.508306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:08:54.004 [2024-12-09 06:07:48.508311] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 01:08:54.004 [2024-12-09 06:07:48.508319] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 01:08:54.004 [2024-12-09 06:07:48.508325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.004 [2024-12-09 06:07:48.508329] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1954750) 01:08:54.004 [2024-12-09 06:07:48.508335] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.004 [2024-12-09 06:07:48.508351] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8740, cid 0, qid 0 01:08:54.004 [2024-12-09 06:07:48.508356] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b88c0, cid 1, qid 0 01:08:54.004 [2024-12-09 06:07:48.508361] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8a40, cid 2, qid 0 01:08:54.004 [2024-12-09 06:07:48.508366] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8bc0, cid 3, qid 
0 01:08:54.004 [2024-12-09 06:07:48.508370] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8d40, cid 4, qid 0 01:08:54.004 [2024-12-09 06:07:48.508437] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.004 [2024-12-09 06:07:48.508443] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.004 [2024-12-09 06:07:48.508447] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.004 [2024-12-09 06:07:48.508451] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8d40) on tqpair=0x1954750 01:08:54.004 [2024-12-09 06:07:48.508456] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 01:08:54.004 [2024-12-09 06:07:48.508464] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 01:08:54.004 [2024-12-09 06:07:48.508472] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 01:08:54.004 [2024-12-09 06:07:48.508479] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 01:08:54.004 [2024-12-09 06:07:48.508485] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.004 [2024-12-09 06:07:48.508489] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.004 [2024-12-09 06:07:48.508493] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1954750) 01:08:54.004 [2024-12-09 06:07:48.508499] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:08:54.004 [2024-12-09 06:07:48.508514] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8d40, cid 4, qid 0 01:08:54.004 [2024-12-09 06:07:48.508560] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.004 [2024-12-09 06:07:48.508566] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.004 [2024-12-09 06:07:48.508570] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.004 [2024-12-09 06:07:48.508574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8d40) on tqpair=0x1954750 01:08:54.004 [2024-12-09 06:07:48.508624] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 01:08:54.004 [2024-12-09 06:07:48.508633] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 01:08:54.004 [2024-12-09 06:07:48.508640] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.004 [2024-12-09 06:07:48.508644] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1954750) 01:08:54.004 [2024-12-09 06:07:48.508650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.004 [2024-12-09 06:07:48.508665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8d40, cid 4, qid 0 01:08:54.004 [2024-12-09 06:07:48.508713] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:08:54.004 [2024-12-09 
06:07:48.508719] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:08:54.004 [2024-12-09 06:07:48.508722] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:08:54.004 [2024-12-09 06:07:48.508726] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1954750): datao=0, datal=4096, cccid=4 01:08:54.004 [2024-12-09 06:07:48.508731] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b8d40) on tqpair(0x1954750): expected_datao=0, payload_size=4096 01:08:54.004 [2024-12-09 06:07:48.508736] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.004 [2024-12-09 06:07:48.508742] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:08:54.004 [2024-12-09 06:07:48.508745] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:08:54.004 [2024-12-09 06:07:48.508754] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.004 [2024-12-09 06:07:48.508759] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.004 [2024-12-09 06:07:48.508763] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.004 [2024-12-09 06:07:48.508767] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8d40) on tqpair=0x1954750 01:08:54.004 [2024-12-09 06:07:48.508779] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 01:08:54.004 [2024-12-09 06:07:48.508791] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 01:08:54.004 [2024-12-09 06:07:48.508800] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 01:08:54.004 [2024-12-09 06:07:48.508807] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.004 [2024-12-09 06:07:48.508811] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1954750) 01:08:54.004 [2024-12-09 06:07:48.508817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.004 [2024-12-09 06:07:48.508832] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8d40, cid 4, qid 0 01:08:54.004 [2024-12-09 06:07:48.508888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:08:54.004 [2024-12-09 06:07:48.508893] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:08:54.004 [2024-12-09 06:07:48.508897] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:08:54.004 [2024-12-09 06:07:48.508901] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1954750): datao=0, datal=4096, cccid=4 01:08:54.004 [2024-12-09 06:07:48.508906] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b8d40) on tqpair(0x1954750): expected_datao=0, payload_size=4096 01:08:54.004 [2024-12-09 06:07:48.508910] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.004 [2024-12-09 06:07:48.508916] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:08:54.004 [2024-12-09 06:07:48.508920] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:08:54.004 [2024-12-09 06:07:48.508929] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.004 [2024-12-09 06:07:48.508934] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 01:08:54.004 [2024-12-09 06:07:48.508938] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.004 [2024-12-09 06:07:48.508942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8d40) on tqpair=0x1954750 01:08:54.004 [2024-12-09 06:07:48.508954] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 01:08:54.004 [2024-12-09 06:07:48.508962] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 01:08:54.004 [2024-12-09 06:07:48.508969] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.004 [2024-12-09 06:07:48.508973] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1954750) 01:08:54.004 [2024-12-09 06:07:48.508979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.005 [2024-12-09 06:07:48.508994] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8d40, cid 4, qid 0 01:08:54.005 [2024-12-09 06:07:48.509045] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:08:54.005 [2024-12-09 06:07:48.509051] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:08:54.005 [2024-12-09 06:07:48.509055] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509058] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1954750): datao=0, datal=4096, cccid=4 01:08:54.005 [2024-12-09 06:07:48.509063] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b8d40) on tqpair(0x1954750): expected_datao=0, payload_size=4096 01:08:54.005 [2024-12-09 06:07:48.509067] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509074] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509077] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509097] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.005 [2024-12-09 06:07:48.509103] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.005 [2024-12-09 06:07:48.509107] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509110] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8d40) on tqpair=0x1954750 01:08:54.005 [2024-12-09 06:07:48.509118] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 01:08:54.005 [2024-12-09 06:07:48.509125] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 01:08:54.005 [2024-12-09 06:07:48.509135] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 01:08:54.005 [2024-12-09 06:07:48.509141] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 01:08:54.005 [2024-12-09 06:07:48.509146] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set 
doorbell buffer config (timeout 30000 ms) 01:08:54.005 [2024-12-09 06:07:48.509152] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 01:08:54.005 [2024-12-09 06:07:48.509157] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 01:08:54.005 [2024-12-09 06:07:48.509162] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 01:08:54.005 [2024-12-09 06:07:48.509168] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 01:08:54.005 [2024-12-09 06:07:48.509181] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509185] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1954750) 01:08:54.005 [2024-12-09 06:07:48.509191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.005 [2024-12-09 06:07:48.509197] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1954750) 01:08:54.005 [2024-12-09 06:07:48.509210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 01:08:54.005 [2024-12-09 06:07:48.509230] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8d40, cid 4, qid 0 01:08:54.005 [2024-12-09 06:07:48.509235] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8ec0, cid 5, qid 0 01:08:54.005 [2024-12-09 06:07:48.509285] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.005 [2024-12-09 06:07:48.509291] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.005 [2024-12-09 06:07:48.509295] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509299] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8d40) on tqpair=0x1954750 01:08:54.005 [2024-12-09 06:07:48.509305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.005 [2024-12-09 06:07:48.509310] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.005 [2024-12-09 06:07:48.509314] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509318] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8ec0) on tqpair=0x1954750 01:08:54.005 [2024-12-09 06:07:48.509327] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509331] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1954750) 01:08:54.005 [2024-12-09 06:07:48.509336] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.005 [2024-12-09 06:07:48.509352] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8ec0, cid 5, qid 0 01:08:54.005 [2024-12-09 06:07:48.509386] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.005 
[2024-12-09 06:07:48.509401] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.005 [2024-12-09 06:07:48.509405] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509409] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8ec0) on tqpair=0x1954750 01:08:54.005 [2024-12-09 06:07:48.509418] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509422] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1954750) 01:08:54.005 [2024-12-09 06:07:48.509428] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.005 [2024-12-09 06:07:48.509443] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8ec0, cid 5, qid 0 01:08:54.005 [2024-12-09 06:07:48.509487] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.005 [2024-12-09 06:07:48.509492] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.005 [2024-12-09 06:07:48.509496] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509500] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8ec0) on tqpair=0x1954750 01:08:54.005 [2024-12-09 06:07:48.509509] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509513] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1954750) 01:08:54.005 [2024-12-09 06:07:48.509519] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.005 [2024-12-09 06:07:48.509534] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8ec0, cid 5, qid 0 01:08:54.005 [2024-12-09 06:07:48.509573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.005 [2024-12-09 06:07:48.509579] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.005 [2024-12-09 06:07:48.509582] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509587] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8ec0) on tqpair=0x1954750 01:08:54.005 [2024-12-09 06:07:48.509601] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509605] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1954750) 01:08:54.005 [2024-12-09 06:07:48.509611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.005 [2024-12-09 06:07:48.509618] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509622] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1954750) 01:08:54.005 [2024-12-09 06:07:48.509628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.005 [2024-12-09 06:07:48.509635] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509639] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on 
tqpair(0x1954750) 01:08:54.005 [2024-12-09 06:07:48.509644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.005 [2024-12-09 06:07:48.509654] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509658] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1954750) 01:08:54.005 [2024-12-09 06:07:48.509663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.005 [2024-12-09 06:07:48.509679] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8ec0, cid 5, qid 0 01:08:54.005 [2024-12-09 06:07:48.509685] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8d40, cid 4, qid 0 01:08:54.005 [2024-12-09 06:07:48.509689] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b9040, cid 6, qid 0 01:08:54.005 [2024-12-09 06:07:48.509694] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b91c0, cid 7, qid 0 01:08:54.005 [2024-12-09 06:07:48.509809] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:08:54.005 [2024-12-09 06:07:48.509815] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:08:54.005 [2024-12-09 06:07:48.509819] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509823] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1954750): datao=0, datal=8192, cccid=5 01:08:54.005 [2024-12-09 06:07:48.509828] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b8ec0) on tqpair(0x1954750): expected_datao=0, payload_size=8192 01:08:54.005 [2024-12-09 06:07:48.509833] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509849] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509853] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509859] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:08:54.005 [2024-12-09 06:07:48.509864] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:08:54.005 [2024-12-09 06:07:48.509868] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509871] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1954750): datao=0, datal=512, cccid=4 01:08:54.005 [2024-12-09 06:07:48.509876] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b8d40) on tqpair(0x1954750): expected_datao=0, payload_size=512 01:08:54.005 [2024-12-09 06:07:48.509881] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509887] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509890] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:08:54.005 [2024-12-09 06:07:48.509901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:08:54.005 [2024-12-09 06:07:48.509905] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:08:54.005 [2024-12-09 06:07:48.509908] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1954750): datao=0, datal=512, cccid=6 01:08:54.006 [2024-12-09 06:07:48.509913] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b9040) on tqpair(0x1954750): expected_datao=0, payload_size=512 01:08:54.006 [2024-12-09 06:07:48.509917] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.006 [2024-12-09 06:07:48.509923] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:08:54.006 [2024-12-09 06:07:48.509927] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:08:54.006 [2024-12-09 06:07:48.509932] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:08:54.006 [2024-12-09 06:07:48.509937] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:08:54.006 [2024-12-09 06:07:48.509941] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:08:54.006 [2024-12-09 06:07:48.509945] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1954750): datao=0, datal=4096, cccid=7 01:08:54.006 [2024-12-09 06:07:48.509949] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b91c0) on tqpair(0x1954750): expected_datao=0, payload_size=4096 01:08:54.006 [2024-12-09 06:07:48.509954] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.006 [2024-12-09 06:07:48.509960] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:08:54.006 [2024-12-09 06:07:48.509964] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:08:54.006 [2024-12-09 06:07:48.509969] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.006 [2024-12-09 06:07:48.509974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.006 [2024-12-09 06:07:48.509978] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.006 [2024-12-09 06:07:48.509982] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8ec0) on tqpair=0x1954750 01:08:54.006 [2024-12-09 06:07:48.509994] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.006 [2024-12-09 06:07:48.510000] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.006 [2024-12-09 06:07:48.510003] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.006 [2024-12-09 06:07:48.510008] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8d40) on tqpair=0x1954750 01:08:54.006 [2024-12-09 06:07:48.510020] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.006 [2024-12-09 06:07:48.510025] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.006 [2024-12-09 06:07:48.510029] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.006 [2024-12-09 06:07:48.510033] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b9040) on tqpair=0x1954750 01:08:54.006 [2024-12-09 06:07:48.510040] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.006 [2024-12-09 06:07:48.510045] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.006 ===================================================== 01:08:54.006 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:08:54.006 ===================================================== 01:08:54.006 Controller Capabilities/Features 01:08:54.006 ================================ 01:08:54.006 Vendor ID: 8086 01:08:54.006 Subsystem Vendor ID: 8086 
01:08:54.006 Serial Number: SPDK00000000000001
01:08:54.006 Model Number: SPDK bdev Controller
01:08:54.006 Firmware Version: 25.01
01:08:54.006 Recommended Arb Burst: 6
01:08:54.006 IEEE OUI Identifier: e4 d2 5c
01:08:54.006 Multi-path I/O
01:08:54.006 May have multiple subsystem ports: Yes
01:08:54.006 May have multiple controllers: Yes
01:08:54.006 Associated with SR-IOV VF: No
01:08:54.006 Max Data Transfer Size: 131072
01:08:54.006 Max Number of Namespaces: 32
01:08:54.006 Max Number of I/O Queues: 127
01:08:54.006 NVMe Specification Version (VS): 1.3
01:08:54.006 NVMe Specification Version (Identify): 1.3
01:08:54.006 Maximum Queue Entries: 128
01:08:54.006 Contiguous Queues Required: Yes
01:08:54.006 Arbitration Mechanisms Supported
01:08:54.006 Weighted Round Robin: Not Supported
01:08:54.006 Vendor Specific: Not Supported
01:08:54.006 Reset Timeout: 15000 ms
01:08:54.006 Doorbell Stride: 4 bytes
01:08:54.006 NVM Subsystem Reset: Not Supported
01:08:54.006 Command Sets Supported
01:08:54.006 NVM Command Set: Supported
01:08:54.006 Boot Partition: Not Supported
01:08:54.006 Memory Page Size Minimum: 4096 bytes
01:08:54.006 Memory Page Size Maximum: 4096 bytes
01:08:54.006 Persistent Memory Region: Not Supported
01:08:54.006 Optional Asynchronous Events Supported
01:08:54.006 Namespace Attribute Notices: Supported
01:08:54.006 Firmware Activation Notices: Not Supported
01:08:54.006 ANA Change Notices: Not Supported
01:08:54.006 PLE Aggregate Log Change Notices: Not Supported
01:08:54.006 LBA Status Info Alert Notices: Not Supported
01:08:54.006 EGE Aggregate Log Change Notices: Not Supported
01:08:54.006 Normal NVM Subsystem Shutdown event: Not Supported
01:08:54.006 Zone Descriptor Change Notices: Not Supported
01:08:54.006 Discovery Log Change Notices: Not Supported
01:08:54.006 Controller Attributes
01:08:54.006 128-bit Host Identifier: Supported
01:08:54.006 Non-Operational Permissive Mode: Not Supported
01:08:54.006 NVM Sets: Not Supported
01:08:54.006 Read Recovery Levels: Not Supported
01:08:54.006 Endurance Groups: Not Supported
01:08:54.006 Predictable Latency Mode: Not Supported
01:08:54.006 Traffic Based Keep ALive: Not Supported
01:08:54.006 Namespace Granularity: Not Supported
01:08:54.006 SQ Associations: Not Supported
01:08:54.006 UUID List: Not Supported
01:08:54.006 Multi-Domain Subsystem: Not Supported
01:08:54.006 Fixed Capacity Management: Not Supported
01:08:54.006 Variable Capacity Management: Not Supported
01:08:54.006 Delete Endurance Group: Not Supported
01:08:54.006 Delete NVM Set: Not Supported
01:08:54.006 Extended LBA Formats Supported: Not Supported
01:08:54.006 Flexible Data Placement Supported: Not Supported
01:08:54.006
01:08:54.006 Controller Memory Buffer Support
01:08:54.006 ================================
01:08:54.006 Supported: No
01:08:54.006
01:08:54.006 Persistent Memory Region Support
01:08:54.006 ================================
01:08:54.006 Supported: No
01:08:54.006
01:08:54.006 Admin Command Set Attributes
01:08:54.006 ============================
01:08:54.006 Security Send/Receive: Not Supported
01:08:54.006 Format NVM: Not Supported
01:08:54.006 Firmware Activate/Download: Not Supported
01:08:54.006 Namespace Management: Not Supported
01:08:54.006 Device Self-Test: Not Supported
01:08:54.006 Directives: Not Supported
01:08:54.006 NVMe-MI: Not Supported
01:08:54.006 Virtualization Management: Not Supported
01:08:54.006 Doorbell Buffer Config: Not Supported
01:08:54.006 Get LBA Status Capability: Not Supported
01:08:54.006 Command & Feature Lockdown Capability: Not Supported
01:08:54.006 Abort Command Limit: 4
01:08:54.006 Async Event Request Limit: 4
01:08:54.006 Number of Firmware Slots: N/A
01:08:54.006 Firmware Slot 1 Read-Only: N/A
01:08:54.006 Firmware Activation Without Reset: N/A
01:08:54.006 Multiple Update Detection Support: N/A
01:08:54.006 Firmware Update Granularity: No Information Provided
01:08:54.006 Per-Namespace SMART Log: No
01:08:54.006 Asymmetric Namespace Access Log Page: Not Supported
01:08:54.006 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
01:08:54.006 Command Effects Log Page: Supported
01:08:54.006 Get Log Page Extended Data: Supported
01:08:54.006 Telemetry Log Pages: Not Supported
01:08:54.006 Persistent Event Log Pages: Not Supported
01:08:54.006 Supported Log Pages Log Page: May Support
01:08:54.006 Commands Supported & Effects Log Page: Not Supported
01:08:54.006 Feature Identifiers & Effects Log Page:May Support
01:08:54.006 NVMe-MI Commands & Effects Log Page: May Support
01:08:54.006 Data Area 4 for Telemetry Log: Not Supported
01:08:54.006 Error Log Page Entries Supported: 128
01:08:54.006 Keep Alive: Supported
01:08:54.006 Keep Alive Granularity: 10000 ms
01:08:54.006
01:08:54.006 NVM Command Set Attributes
01:08:54.006 ==========================
01:08:54.006 Submission Queue Entry Size
01:08:54.006 Max: 64
01:08:54.006 Min: 64
01:08:54.006 Completion Queue Entry Size
01:08:54.006 Max: 16
01:08:54.006 Min: 16
01:08:54.006 Number of Namespaces: 32
01:08:54.006 Compare Command: Supported
01:08:54.006 Write Uncorrectable Command: Not Supported
01:08:54.006 Dataset Management Command: Supported
01:08:54.006 Write Zeroes Command: Supported
01:08:54.006 Set Features Save Field: Not Supported
01:08:54.006 Reservations: Supported
01:08:54.006 Timestamp: Not Supported
01:08:54.006 Copy: Supported
01:08:54.006 Volatile Write Cache: Present
01:08:54.006 Atomic Write Unit (Normal): 1
01:08:54.006 Atomic Write Unit (PFail): 1
01:08:54.006 Atomic Compare & Write Unit: 1
01:08:54.006 Fused Compare & Write: Supported
01:08:54.006 Scatter-Gather List
01:08:54.006 SGL Command Set: Supported
01:08:54.006 SGL Keyed: Supported
01:08:54.006 SGL Bit Bucket Descriptor: Not Supported
01:08:54.006 SGL Metadata Pointer: Not Supported
01:08:54.006 Oversized SGL: Not Supported
01:08:54.006 SGL Metadata Address: Not Supported
01:08:54.006 SGL Offset: Supported
01:08:54.006 Transport SGL Data Block: Not Supported
01:08:54.006 Replay Protected Memory Block: Not Supported
01:08:54.006
01:08:54.006 Firmware Slot Information
01:08:54.006 =========================
01:08:54.006 Active slot: 1
01:08:54.006 Slot 1 Firmware Revision: 25.01
01:08:54.006
01:08:54.006
01:08:54.006 Commands Supported and Effects
01:08:54.006 ==============================
01:08:54.007 Admin Commands
01:08:54.007 --------------
01:08:54.007 Get Log Page (02h): Supported
01:08:54.007 Identify (06h): Supported
01:08:54.007 Abort (08h): Supported
01:08:54.007 Set Features (09h): Supported
01:08:54.007 Get Features (0Ah): Supported
01:08:54.007 Asynchronous Event Request (0Ch): Supported
01:08:54.007 Keep Alive (18h): Supported
01:08:54.007 I/O Commands
01:08:54.007 ------------
01:08:54.007 Flush (00h): Supported LBA-Change
01:08:54.007 Write (01h): Supported LBA-Change
01:08:54.007 Read (02h): Supported
01:08:54.007 Compare (05h): Supported
01:08:54.007 Write Zeroes (08h): Supported LBA-Change
01:08:54.007 Dataset Management (09h): Supported LBA-Change
01:08:54.007 Copy (19h): Supported LBA-Change
01:08:54.007
01:08:54.007 Error Log
01:08:54.007
========= 01:08:54.007 01:08:54.007 Arbitration 01:08:54.007 =========== 01:08:54.007 Arbitration Burst: 1 01:08:54.007 01:08:54.007 Power Management 01:08:54.007 ================ 01:08:54.007 Number of Power States: 1 01:08:54.007 Current Power State: Power State #0 01:08:54.007 Power State #0: 01:08:54.007 Max Power: 0.00 W 01:08:54.007 Non-Operational State: Operational 01:08:54.007 Entry Latency: Not Reported 01:08:54.007 Exit Latency: Not Reported 01:08:54.007 Relative Read Throughput: 0 01:08:54.007 Relative Read Latency: 0 01:08:54.007 Relative Write Throughput: 0 01:08:54.007 Relative Write Latency: 0 01:08:54.007 Idle Power: Not Reported 01:08:54.007 Active Power: Not Reported 01:08:54.007 Non-Operational Permissive Mode: Not Supported 01:08:54.007 01:08:54.007 Health Information 01:08:54.007 ================== 01:08:54.007 Critical Warnings: 01:08:54.007 Available Spare Space: OK 01:08:54.007 Temperature: OK 01:08:54.007 Device Reliability: OK 01:08:54.007 Read Only: No 01:08:54.007 Volatile Memory Backup: OK 01:08:54.007 Current Temperature: 0 Kelvin (-273 Celsius) 01:08:54.007 Temperature Threshold: 0 Kelvin (-273 Celsius) 01:08:54.007 Available Spare: 0% 01:08:54.007 Available Spare Threshold: 0% 01:08:54.007 Life Percentage Used:[2024-12-09 06:07:48.510049] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.007 [2024-12-09 06:07:48.510053] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b91c0) on tqpair=0x1954750 01:08:54.007 [2024-12-09 06:07:48.510154] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.007 [2024-12-09 06:07:48.510159] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1954750) 01:08:54.007 [2024-12-09 06:07:48.510166] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.007 [2024-12-09 06:07:48.510184] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b91c0, cid 7, qid 0 01:08:54.007 [2024-12-09 06:07:48.510219] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.007 [2024-12-09 06:07:48.510225] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.007 [2024-12-09 06:07:48.510229] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.007 [2024-12-09 06:07:48.510233] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b91c0) on tqpair=0x1954750 01:08:54.007 [2024-12-09 06:07:48.510264] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 01:08:54.007 [2024-12-09 06:07:48.510273] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8740) on tqpair=0x1954750 01:08:54.007 [2024-12-09 06:07:48.510279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:54.007 [2024-12-09 06:07:48.510284] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b88c0) on tqpair=0x1954750 01:08:54.007 [2024-12-09 06:07:48.510289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:54.007 [2024-12-09 06:07:48.510294] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8a40) on tqpair=0x1954750 01:08:54.007 [2024-12-09 06:07:48.510299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:54.007 [2024-12-09 06:07:48.510304] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8bc0) on tqpair=0x1954750 01:08:54.007 [2024-12-09 06:07:48.510308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:54.007 [2024-12-09 06:07:48.510316] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.007 [2024-12-09 06:07:48.510320] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.007 [2024-12-09 06:07:48.510324] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1954750) 01:08:54.007 [2024-12-09 06:07:48.510330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.007 [2024-12-09 06:07:48.510347] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8bc0, cid 3, qid 0 01:08:54.007 [2024-12-09 06:07:48.510382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.007 [2024-12-09 06:07:48.510388] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.007 [2024-12-09 06:07:48.510392] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.007 [2024-12-09 06:07:48.510396] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8bc0) on tqpair=0x1954750 01:08:54.007 [2024-12-09 06:07:48.510402] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.007 [2024-12-09 06:07:48.510406] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.007 [2024-12-09 06:07:48.510410] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1954750) 01:08:54.007 [2024-12-09 06:07:48.510416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.007 [2024-12-09 06:07:48.510434] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8bc0, cid 3, qid 0 01:08:54.007 [2024-12-09 06:07:48.510491] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.007 [2024-12-09 06:07:48.510496] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.007 [2024-12-09 06:07:48.510500] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.007 [2024-12-09 06:07:48.510504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8bc0) on tqpair=0x1954750 01:08:54.007 [2024-12-09 06:07:48.510509] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 01:08:54.007 [2024-12-09 06:07:48.510514] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 01:08:54.007 [2024-12-09 06:07:48.510522] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.007 [2024-12-09 06:07:48.510526] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.007 [2024-12-09 06:07:48.510530] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1954750) 01:08:54.007 [2024-12-09 06:07:48.510536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.007 [2024-12-09 06:07:48.510550] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8bc0, cid 3, qid 0 
01:08:54.007 [2024-12-09 06:07:48.510585] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.007 [2024-12-09 06:07:48.510591] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.007 [2024-12-09 06:07:48.510595] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.007 [2024-12-09 06:07:48.510599] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8bc0) on tqpair=0x1954750 01:08:54.007 [2024-12-09 06:07:48.510607] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.007 [2024-12-09 06:07:48.510612] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.007 [2024-12-09 06:07:48.510615] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1954750) 01:08:54.007 [2024-12-09 06:07:48.510621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.007 [2024-12-09 06:07:48.510636] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8bc0, cid 3, qid 0 01:08:54.007 [2024-12-09 06:07:48.510672] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.007 [2024-12-09 06:07:48.510678] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.007 [2024-12-09 06:07:48.510681] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.007 [2024-12-09 06:07:48.510685] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8bc0) on tqpair=0x1954750 01:08:54.007 [2024-12-09 06:07:48.510694] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.007 [2024-12-09 06:07:48.510698] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.007 [2024-12-09 06:07:48.510701] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1954750) 01:08:54.007 [2024-12-09 06:07:48.510707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.008 [2024-12-09 06:07:48.510722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8bc0, cid 3, qid 0 01:08:54.008 [2024-12-09 06:07:48.510759] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.008 [2024-12-09 06:07:48.510764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.008 [2024-12-09 06:07:48.510768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.008 [2024-12-09 06:07:48.510772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8bc0) on tqpair=0x1954750 01:08:54.008 [2024-12-09 06:07:48.510780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.008 [2024-12-09 06:07:48.510784] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.008 [2024-12-09 06:07:48.510788] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1954750) 01:08:54.008 [2024-12-09 06:07:48.510794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.008 [2024-12-09 06:07:48.510807] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8bc0, cid 3, qid 0 01:08:54.008 [2024-12-09 06:07:48.510842] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.008 [2024-12-09 06:07:48.510848] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 01:08:54.008 [2024-12-09 06:07:48.510851] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.008 [2024-12-09 06:07:48.510855] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8bc0) on tqpair=0x1954750 01:08:54.008 [2024-12-09 06:07:48.510864] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.008 [2024-12-09 06:07:48.510868] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.008 [2024-12-09 06:07:48.510872] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1954750) 01:08:54.008 [2024-12-09 06:07:48.510878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.008 [2024-12-09 06:07:48.510892] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8bc0, cid 3, qid 0 01:08:54.008 [2024-12-09 06:07:48.510930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.008 [2024-12-09 06:07:48.510935] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.008 [2024-12-09 06:07:48.510939] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.008 [2024-12-09 06:07:48.510943] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8bc0) on tqpair=0x1954750 01:08:54.008 [2024-12-09 06:07:48.510951] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.008 [2024-12-09 06:07:48.510955] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.008 [2024-12-09 06:07:48.510959] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1954750) 01:08:54.008 [2024-12-09 06:07:48.510965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.008 [2024-12-09 06:07:48.510979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8bc0, cid 3, qid 0 01:08:54.008 [2024-12-09 06:07:48.511016] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.008 [2024-12-09 06:07:48.511022] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.008 [2024-12-09 06:07:48.511026] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.008 [2024-12-09 06:07:48.511029] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8bc0) on tqpair=0x1954750 01:08:54.008 [2024-12-09 06:07:48.511038] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.008 [2024-12-09 06:07:48.511042] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.008 [2024-12-09 06:07:48.511046] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1954750) 01:08:54.008 [2024-12-09 06:07:48.511052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.008 [2024-12-09 06:07:48.511066] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8bc0, cid 3, qid 0 01:08:54.008 [2024-12-09 06:07:48.515108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.008 [2024-12-09 06:07:48.515129] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.008 [2024-12-09 06:07:48.515134] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.008 [2024-12-09 06:07:48.515138] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x19b8bc0) on tqpair=0x1954750 01:08:54.008 [2024-12-09 06:07:48.515148] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:08:54.008 [2024-12-09 06:07:48.515152] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:08:54.008 [2024-12-09 06:07:48.515156] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1954750) 01:08:54.008 [2024-12-09 06:07:48.515162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:54.008 [2024-12-09 06:07:48.515182] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b8bc0, cid 3, qid 0 01:08:54.008 [2024-12-09 06:07:48.515218] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:08:54.008 [2024-12-09 06:07:48.515223] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:08:54.008 [2024-12-09 06:07:48.515227] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:08:54.008 [2024-12-09 06:07:48.515231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b8bc0) on tqpair=0x1954750 01:08:54.008 [2024-12-09 06:07:48.515238] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 01:08:54.008 0% 01:08:54.008 Data Units Read: 0 01:08:54.008 Data Units Written: 0 01:08:54.008 Host Read Commands: 0 01:08:54.008 Host Write Commands: 0 01:08:54.008 Controller Busy Time: 0 minutes 01:08:54.008 Power Cycles: 0 01:08:54.008 Power On Hours: 0 hours 01:08:54.008 Unsafe Shutdowns: 0 01:08:54.008 Unrecoverable Media Errors: 0 01:08:54.008 Lifetime Error Log Entries: 0 01:08:54.008 Warning Temperature Time: 0 minutes 01:08:54.008 Critical Temperature Time: 0 minutes 01:08:54.008 01:08:54.008 Number of Queues 01:08:54.008 ================ 01:08:54.008 Number of I/O Submission Queues: 127 01:08:54.008 Number of I/O Completion Queues: 127 01:08:54.008 01:08:54.008 Active Namespaces 01:08:54.008 ================= 01:08:54.008 Namespace ID:1 01:08:54.008 Error Recovery Timeout: Unlimited 01:08:54.008 Command Set Identifier: NVM (00h) 01:08:54.008 Deallocate: Supported 01:08:54.008 Deallocated/Unwritten Error: Not Supported 01:08:54.008 Deallocated Read Value: Unknown 01:08:54.008 Deallocate in Write Zeroes: Not Supported 01:08:54.008 Deallocated Guard Field: 0xFFFF 01:08:54.008 Flush: Supported 01:08:54.008 Reservation: Supported 01:08:54.008 Namespace Sharing Capabilities: Multiple Controllers 01:08:54.008 Size (in LBAs): 131072 (0GiB) 01:08:54.008 Capacity (in LBAs): 131072 (0GiB) 01:08:54.008 Utilization (in LBAs): 131072 (0GiB) 01:08:54.008 NGUID: ABCDEF0123456789ABCDEF0123456789 01:08:54.008 EUI64: ABCDEF0123456789 01:08:54.008 UUID: 6f24753a-0c97-4ec4-b4b7-34ca58e2288f 01:08:54.008 Thin Provisioning: Not Supported 01:08:54.008 Per-NS Atomic Units: Yes 01:08:54.008 Atomic Boundary Size (Normal): 0 01:08:54.008 Atomic Boundary Size (PFail): 0 01:08:54.008 Atomic Boundary Offset: 0 01:08:54.008 Maximum Single Source Range Length: 65535 01:08:54.008 Maximum Copy Length: 65535 01:08:54.008 Maximum Source Range Count: 1 01:08:54.008 NGUID/EUI64 Never Reused: No 01:08:54.008 Namespace Write Protected: No 01:08:54.008 Number of LBA Formats: 1 01:08:54.008 Current LBA Format: LBA Format #00 01:08:54.008 LBA Format #00: Data Size: 512 Metadata Size: 0 01:08:54.008 01:08:54.008 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 01:08:54.268 06:07:48 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:08:54.268 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:08:54.268 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:08:54.268 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:08:54.268 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 01:08:54.268 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 01:08:54.268 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 01:08:54.268 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 01:08:54.268 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:08:54.268 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 01:08:54.268 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 01:08:54.268 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:08:54.268 rmmod nvme_tcp 01:08:54.268 rmmod nvme_fabrics 01:08:54.268 rmmod nvme_keyring 01:08:54.268 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:08:54.268 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 01:08:54.268 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 01:08:54.268 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 73659 ']' 01:08:54.268 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 73659 01:08:54.268 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 73659 ']' 01:08:54.268 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 73659 01:08:54.268 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 01:08:54.268 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:08:54.268 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73659 01:08:54.268 killing process with pid 73659 01:08:54.268 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:08:54.268 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:08:54.268 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73659' 01:08:54.268 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 73659 01:08:54.268 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 73659 01:08:54.528 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:08:54.528 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:08:54.528 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:08:54.528 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 01:08:54.528 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 01:08:54.528 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
01:08:54.528 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 01:08:54.528 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:08:54.528 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:08:54.528 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:08:54.528 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:08:54.528 06:07:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:08:54.528 06:07:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:08:54.528 06:07:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:08:54.528 06:07:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:08:54.528 06:07:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:08:54.528 06:07:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:08:54.528 06:07:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:08:54.528 06:07:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:08:54.788 06:07:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:08:54.788 06:07:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:08:54.788 06:07:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:08:54.788 06:07:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 01:08:54.788 06:07:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:54.788 06:07:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:08:54.788 06:07:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:54.788 06:07:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 01:08:54.788 01:08:54.788 real 0m3.124s 01:08:54.788 user 0m7.021s 01:08:54.788 sys 0m1.003s 01:08:54.788 06:07:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 01:08:54.788 ************************************ 01:08:54.788 END TEST nvmf_identify 01:08:54.788 ************************************ 01:08:54.788 06:07:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:08:54.788 06:07:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 01:08:54.788 06:07:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:08:54.788 06:07:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:08:54.788 06:07:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:08:54.788 ************************************ 01:08:54.788 START TEST nvmf_perf 01:08:54.788 ************************************ 01:08:54.788 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 01:08:55.048 * Looking for test storage... 01:08:55.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:08:55.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:55.048 --rc genhtml_branch_coverage=1 01:08:55.048 --rc genhtml_function_coverage=1 01:08:55.048 --rc genhtml_legend=1 01:08:55.048 --rc geninfo_all_blocks=1 01:08:55.048 --rc geninfo_unexecuted_blocks=1 01:08:55.048 01:08:55.048 ' 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:08:55.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:55.048 --rc genhtml_branch_coverage=1 01:08:55.048 --rc genhtml_function_coverage=1 01:08:55.048 --rc genhtml_legend=1 01:08:55.048 --rc geninfo_all_blocks=1 01:08:55.048 --rc geninfo_unexecuted_blocks=1 01:08:55.048 01:08:55.048 ' 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:08:55.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:55.048 --rc genhtml_branch_coverage=1 01:08:55.048 --rc genhtml_function_coverage=1 01:08:55.048 --rc genhtml_legend=1 01:08:55.048 --rc geninfo_all_blocks=1 01:08:55.048 --rc geninfo_unexecuted_blocks=1 01:08:55.048 01:08:55.048 ' 01:08:55.048 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:08:55.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:55.048 --rc genhtml_branch_coverage=1 01:08:55.048 --rc genhtml_function_coverage=1 01:08:55.048 --rc genhtml_legend=1 01:08:55.048 --rc geninfo_all_blocks=1 01:08:55.048 --rc geninfo_unexecuted_blocks=1 01:08:55.048 01:08:55.049 ' 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:08:55.049 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 01:08:55.049 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:08:55.327 Cannot find device "nvmf_init_br" 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:08:55.327 Cannot find device "nvmf_init_br2" 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:08:55.327 Cannot find device "nvmf_tgt_br" 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 01:08:55.327 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:08:55.327 Cannot find device "nvmf_tgt_br2" 01:08:55.328 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 01:08:55.328 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:08:55.328 Cannot find device "nvmf_init_br" 01:08:55.328 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 01:08:55.328 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:08:55.328 Cannot find device "nvmf_init_br2" 01:08:55.328 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 01:08:55.328 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:08:55.328 Cannot find device "nvmf_tgt_br" 01:08:55.328 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 01:08:55.328 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:08:55.328 Cannot find device "nvmf_tgt_br2" 01:08:55.328 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 01:08:55.328 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:08:55.328 Cannot find device "nvmf_br" 01:08:55.328 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 01:08:55.328 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:08:55.328 Cannot find device "nvmf_init_if" 01:08:55.328 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 01:08:55.328 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:08:55.328 Cannot find device "nvmf_init_if2" 01:08:55.328 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 01:08:55.328 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:08:55.328 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:55.328 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 01:08:55.328 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:08:55.328 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:55.328 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 01:08:55.328 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:08:55.328 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:08:55.328 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:08:55.328 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:08:55.587 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:08:55.587 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:08:55.587 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:08:55.587 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:08:55.587 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:08:55.587 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:08:55.587 06:07:49 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:08:55.587 06:07:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:08:55.587 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:08:55.587 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:08:55.587 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:08:55.587 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:08:55.587 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:08:55.587 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:08:55.587 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:08:55.587 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:08:55.587 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:08:55.587 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:08:55.587 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:08:55.587 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:08:55.587 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:08:55.587 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:08:55.587 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:08:55.587 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:08:55.587 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:08:55.587 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:08:55.587 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:08:55.587 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:08:55.587 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:08:55.587 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:08:55.587 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.151 ms 01:08:55.587 01:08:55.587 --- 10.0.0.3 ping statistics --- 01:08:55.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:55.587 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 01:08:55.587 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:08:55.587 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
01:08:55.587 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 01:08:55.587 01:08:55.587 --- 10.0.0.4 ping statistics --- 01:08:55.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:55.587 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 01:08:55.846 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:08:55.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:08:55.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 01:08:55.846 01:08:55.846 --- 10.0.0.1 ping statistics --- 01:08:55.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:55.846 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 01:08:55.846 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:08:55.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:08:55.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 01:08:55.846 01:08:55.846 --- 10.0.0.2 ping statistics --- 01:08:55.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:55.846 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 01:08:55.846 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:08:55.846 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 01:08:55.846 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:08:55.846 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:08:55.846 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:08:55.846 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:08:55.846 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:08:55.846 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:08:55.846 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:08:55.846 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 01:08:55.846 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:08:55.846 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 01:08:55.846 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 01:08:55.846 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=73917 01:08:55.846 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:08:55.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:08:55.846 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 73917 01:08:55.846 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 73917 ']' 01:08:55.846 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:08:55.846 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 01:08:55.846 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
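Note: for anyone reproducing this run outside the CI harness, the nvmftestinit trace above builds a veth/bridge topology between the host and a network namespace and then launches the target inside that namespace. A minimal sketch of the equivalent manual commands, using the interface names, addresses and nvmf_tgt invocation taken from this log but omitting the second veth pair (nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2/10.0.0.4), the iptables comment tags, and the harness's retry/cleanup logic:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair (stays on the host)
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair (endpoint moved into the netns)
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up; ip link set nvmf_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3    # host reaches the target-side address through nvmf_br
  modprobe nvme-tcp
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

This is an illustrative reconstruction of what nvmf/common.sh does in the trace above, not the script itself.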
01:08:55.846 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 01:08:55.846 06:07:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 01:08:55.846 [2024-12-09 06:07:50.294061] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:08:55.846 [2024-12-09 06:07:50.294146] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:08:56.105 [2024-12-09 06:07:50.446068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:08:56.105 [2024-12-09 06:07:50.486499] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:08:56.105 [2024-12-09 06:07:50.486543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:08:56.105 [2024-12-09 06:07:50.486552] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:08:56.105 [2024-12-09 06:07:50.486560] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:08:56.105 [2024-12-09 06:07:50.486567] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:08:56.105 [2024-12-09 06:07:50.487517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:08:56.105 [2024-12-09 06:07:50.487612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:08:56.105 [2024-12-09 06:07:50.488766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:08:56.105 [2024-12-09 06:07:50.488769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:08:56.105 [2024-12-09 06:07:50.531512] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:08:56.672 06:07:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:08:56.672 06:07:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 01:08:56.672 06:07:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:08:56.672 06:07:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 01:08:56.672 06:07:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 01:08:56.672 06:07:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:08:56.672 06:07:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:08:56.672 06:07:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 01:08:57.239 06:07:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 01:08:57.239 06:07:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 01:08:57.239 06:07:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 01:08:57.239 06:07:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:08:57.497 06:07:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 01:08:57.497 06:07:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 01:08:57.497 06:07:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 01:08:57.497 06:07:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 01:08:57.497 06:07:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:08:57.756 [2024-12-09 06:07:52.172257] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:08:57.756 06:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:08:58.014 06:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 01:08:58.014 06:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:08:58.014 06:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 01:08:58.014 06:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 01:08:58.273 06:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:08:58.531 [2024-12-09 06:07:52.948650] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:08:58.531 06:07:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:08:58.789 06:07:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 01:08:58.789 06:07:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 01:08:58.789 06:07:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 01:08:58.789 06:07:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 01:08:59.790 Initializing NVMe Controllers 01:08:59.790 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 01:08:59.790 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 01:08:59.790 Initialization complete. Launching workers. 01:08:59.790 ======================================================== 01:08:59.790 Latency(us) 01:08:59.790 Device Information : IOPS MiB/s Average min max 01:08:59.790 PCIE (0000:00:10.0) NSID 1 from core 0: 18412.99 71.93 1739.06 550.11 8356.19 01:08:59.790 ======================================================== 01:08:59.790 Total : 18412.99 71.93 1739.06 550.11 8356.19 01:08:59.790 01:08:59.790 06:07:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:09:01.182 Initializing NVMe Controllers 01:09:01.182 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:09:01.182 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:09:01.182 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:09:01.182 Initialization complete. Launching workers. 
01:09:01.182 ========================================================
01:09:01.182 Latency(us)
01:09:01.182 Device Information : IOPS MiB/s Average min max
01:09:01.182 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2609.00 10.19 383.12 104.49 4323.56
01:09:01.182 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.00 0.49 8044.61 7971.86 12012.32
01:09:01.182 ========================================================
01:09:01.182 Total : 2734.00 10.68 733.41 104.49 12012.32
01:09:01.182
01:09:01.182 06:07:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
01:09:02.559 Initializing NVMe Controllers
01:09:02.559 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
01:09:02.559 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
01:09:02.559 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
01:09:02.559 Initialization complete. Launching workers.
01:09:02.559 ========================================================
01:09:02.559 Latency(us)
01:09:02.559 Device Information : IOPS MiB/s Average min max
01:09:02.559 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9537.69 37.26 3355.53 520.98 7075.30
01:09:02.559 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4007.73 15.66 8013.73 6975.75 8935.01
01:09:02.559 ========================================================
01:09:02.559 Total : 13545.42 52.91 4733.77 520.98 8935.01
01:09:02.559
01:09:02.559 06:07:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]]
01:09:02.559 06:07:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
01:09:05.096 Initializing NVMe Controllers
01:09:05.096 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
01:09:05.096 Controller IO queue size 128, less than required.
01:09:05.096 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
01:09:05.096 Controller IO queue size 128, less than required.
01:09:05.096 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
01:09:05.096 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
01:09:05.096 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
01:09:05.096 Initialization complete. Launching workers.
01:09:05.096 ========================================================
01:09:05.096 Latency(us)
01:09:05.096 Device Information : IOPS MiB/s Average min max
01:09:05.096 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2247.95 561.99 57852.49 30170.93 106436.84
01:09:05.096 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 699.98 175.00 196959.66 29547.94 322878.82
01:09:05.096 ========================================================
01:09:05.096 Total : 2947.94 736.98 90883.37 29547.94 322878.82
01:09:05.096
01:09:05.096 06:07:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4
01:09:05.355 Initializing NVMe Controllers
01:09:05.355 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
01:09:05.355 Controller IO queue size 128, less than required.
01:09:05.355 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
01:09:05.355 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
01:09:05.355 Controller IO queue size 128, less than required.
01:09:05.355 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
01:09:05.355 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test
01:09:05.355 WARNING: Some requested NVMe devices were skipped
01:09:05.355 No valid NVMe controllers or AIO or URING devices found
01:09:05.356 06:07:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat
01:09:07.891 Initializing NVMe Controllers
01:09:07.891 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
01:09:07.891 Controller IO queue size 128, less than required.
01:09:07.891 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
01:09:07.891 Controller IO queue size 128, less than required.
01:09:07.891 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
01:09:07.891 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
01:09:07.891 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
01:09:07.891 Initialization complete. Launching workers.
01:09:07.891 01:09:07.891 ==================== 01:09:07.891 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 01:09:07.891 TCP transport: 01:09:07.891 polls: 12675 01:09:07.891 idle_polls: 8563 01:09:07.891 sock_completions: 4112 01:09:07.891 nvme_completions: 6623 01:09:07.891 submitted_requests: 10044 01:09:07.891 queued_requests: 1 01:09:07.891 01:09:07.891 ==================== 01:09:07.891 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 01:09:07.891 TCP transport: 01:09:07.891 polls: 13230 01:09:07.891 idle_polls: 7590 01:09:07.891 sock_completions: 5640 01:09:07.891 nvme_completions: 7375 01:09:07.891 submitted_requests: 11164 01:09:07.891 queued_requests: 1 01:09:07.891 ======================================================== 01:09:07.891 Latency(us) 01:09:07.891 Device Information : IOPS MiB/s Average min max 01:09:07.891 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1655.40 413.85 78131.89 36525.64 141277.83 01:09:07.891 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1843.38 460.85 70125.49 32853.64 106809.98 01:09:07.891 ======================================================== 01:09:07.891 Total : 3498.78 874.69 73913.60 32853.64 141277.83 01:09:07.891 01:09:07.891 06:08:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 01:09:07.891 06:08:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:09:08.150 06:08:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 01:09:08.150 06:08:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 01:09:08.150 06:08:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 01:09:08.150 06:08:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 01:09:08.150 06:08:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 01:09:08.150 06:08:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:09:08.150 06:08:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 01:09:08.150 06:08:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 01:09:08.150 06:08:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:09:08.150 rmmod nvme_tcp 01:09:08.150 rmmod nvme_fabrics 01:09:08.150 rmmod nvme_keyring 01:09:08.150 06:08:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:09:08.408 06:08:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 01:09:08.408 06:08:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 01:09:08.408 06:08:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 73917 ']' 01:09:08.408 06:08:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 73917 01:09:08.408 06:08:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 73917 ']' 01:09:08.408 06:08:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 73917 01:09:08.408 06:08:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 01:09:08.408 06:08:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:09:08.408 06:08:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73917 01:09:08.408 killing process with pid 73917 01:09:08.408 06:08:02 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:09:08.408 06:08:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:09:08.408 06:08:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73917' 01:09:08.408 06:08:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 73917 01:09:08.408 06:08:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 73917 01:09:08.975 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:09:08.975 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:09:08.975 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:09:08.975 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 01:09:08.975 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 01:09:08.975 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:09:08.975 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 01:09:08.975 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:09:08.975 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:09:08.975 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:09:08.975 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:09:08.975 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:09:08.975 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:09:08.975 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:09:08.975 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:09:08.975 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:09:08.975 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:09:08.975 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:09:08.975 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:09:08.975 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:09:09.251 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:09:09.251 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:09:09.251 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 01:09:09.251 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:09:09.251 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:09:09.251 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:09:09.251 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 01:09:09.251 ************************************ 01:09:09.251 END TEST nvmf_perf 01:09:09.251 ************************************ 
01:09:09.251 01:09:09.251 real 0m14.333s 01:09:09.251 user 0m49.966s 01:09:09.251 sys 0m4.215s 01:09:09.251 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:09:09.251 06:08:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 01:09:09.251 06:08:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 01:09:09.251 06:08:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:09:09.251 06:08:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:09:09.251 06:08:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:09:09.251 ************************************ 01:09:09.251 START TEST nvmf_fio_host 01:09:09.251 ************************************ 01:09:09.251 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 01:09:09.511 * Looking for test storage... 01:09:09.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:09:09.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:09.511 --rc genhtml_branch_coverage=1 01:09:09.511 --rc genhtml_function_coverage=1 01:09:09.511 --rc genhtml_legend=1 01:09:09.511 --rc geninfo_all_blocks=1 01:09:09.511 --rc geninfo_unexecuted_blocks=1 01:09:09.511 01:09:09.511 ' 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:09:09.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:09.511 --rc genhtml_branch_coverage=1 01:09:09.511 --rc genhtml_function_coverage=1 01:09:09.511 --rc genhtml_legend=1 01:09:09.511 --rc geninfo_all_blocks=1 01:09:09.511 --rc geninfo_unexecuted_blocks=1 01:09:09.511 01:09:09.511 ' 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:09:09.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:09.511 --rc genhtml_branch_coverage=1 01:09:09.511 --rc genhtml_function_coverage=1 01:09:09.511 --rc genhtml_legend=1 01:09:09.511 --rc geninfo_all_blocks=1 01:09:09.511 --rc geninfo_unexecuted_blocks=1 01:09:09.511 01:09:09.511 ' 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:09:09.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:09.511 --rc genhtml_branch_coverage=1 01:09:09.511 --rc genhtml_function_coverage=1 01:09:09.511 --rc genhtml_legend=1 01:09:09.511 --rc geninfo_all_blocks=1 01:09:09.511 --rc geninfo_unexecuted_blocks=1 01:09:09.511 01:09:09.511 ' 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:09:09.511 06:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:09:09.511 06:08:04 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:09.511 06:08:04 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:09:09.511 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:09:09.511 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:09:09.512 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:09:09.512 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:09:09.512 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:09:09.512 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:09:09.512 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:09:09.512 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:09:09.512 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:09:09.512 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:09:09.512 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:09:09.512 Cannot find device "nvmf_init_br" 01:09:09.512 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 01:09:09.512 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:09:09.771 Cannot find device "nvmf_init_br2" 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:09:09.771 Cannot find device "nvmf_tgt_br" 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 01:09:09.771 Cannot find device "nvmf_tgt_br2" 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:09:09.771 Cannot find device "nvmf_init_br" 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:09:09.771 Cannot find device "nvmf_init_br2" 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:09:09.771 Cannot find device "nvmf_tgt_br" 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:09:09.771 Cannot find device "nvmf_tgt_br2" 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:09:09.771 Cannot find device "nvmf_br" 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:09:09.771 Cannot find device "nvmf_init_if" 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:09:09.771 Cannot find device "nvmf_init_if2" 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:09:09.771 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:09:09.771 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:09:09.771 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:09:10.030 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:09:10.030 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 01:09:10.030 01:09:10.030 --- 10.0.0.3 ping statistics --- 01:09:10.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:10.030 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:09:10.030 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:09:10.030 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 01:09:10.030 01:09:10.030 --- 10.0.0.4 ping statistics --- 01:09:10.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:10.030 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:09:10.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:09:10.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 01:09:10.030 01:09:10.030 --- 10.0.0.1 ping statistics --- 01:09:10.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:10.030 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:09:10.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:09:10.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 01:09:10.030 01:09:10.030 --- 10.0.0.2 ping statistics --- 01:09:10.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:10.030 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:09:10.030 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:09:10.289 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 01:09:10.289 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 01:09:10.289 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 01:09:10.289 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 01:09:10.289 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74382 01:09:10.289 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:09:10.289 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:09:10.289 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74382 01:09:10.289 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 74382 ']' 01:09:10.289 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:09:10.289 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 01:09:10.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:09:10.289 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:09:10.289 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 01:09:10.289 06:08:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 01:09:10.289 [2024-12-09 06:08:04.676791] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:09:10.289 [2024-12-09 06:08:04.676855] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:09:10.289 [2024-12-09 06:08:04.829643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:09:10.547 [2024-12-09 06:08:04.876746] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:09:10.547 [2024-12-09 06:08:04.876786] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:09:10.547 [2024-12-09 06:08:04.876795] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:09:10.547 [2024-12-09 06:08:04.876803] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:09:10.547 [2024-12-09 06:08:04.876810] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
01:09:10.547 [2024-12-09 06:08:04.877726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:09:10.547 [2024-12-09 06:08:04.880911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:09:10.547 [2024-12-09 06:08:04.881100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:09:10.547 [2024-12-09 06:08:04.881123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:09:10.547 [2024-12-09 06:08:04.923710] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:09:11.113 06:08:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:09:11.113 06:08:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 01:09:11.113 06:08:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:09:11.372 [2024-12-09 06:08:05.726282] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:09:11.372 06:08:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 01:09:11.372 06:08:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 01:09:11.372 06:08:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 01:09:11.372 06:08:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 01:09:11.630 Malloc1 01:09:11.630 06:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:09:11.889 06:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:09:11.889 06:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:09:12.148 [2024-12-09 06:08:06.601122] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:09:12.148 06:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:09:12.407 06:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 01:09:12.407 06:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 01:09:12.407 06:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 01:09:12.407 06:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:09:12.407 06:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:09:12.407 06:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 01:09:12.407 06:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:09:12.407 06:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 01:09:12.407 06:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 01:09:12.407 06:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:09:12.407 06:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 01:09:12.407 06:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:09:12.407 06:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:09:12.407 06:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 01:09:12.407 06:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:09:12.407 06:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:09:12.407 06:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:09:12.407 06:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:09:12.407 06:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:09:12.407 06:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 01:09:12.407 06:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:09:12.407 06:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 01:09:12.407 06:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 01:09:12.666 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 01:09:12.666 fio-3.35 01:09:12.666 Starting 1 thread 01:09:15.202 01:09:15.202 test: (groupid=0, jobs=1): err= 0: pid=74459: Mon Dec 9 06:08:09 2024 01:09:15.202 read: IOPS=10.7k, BW=42.0MiB/s (44.0MB/s)(84.2MiB/2007msec) 01:09:15.202 slat (nsec): min=1508, max=389340, avg=1701.32, stdev=3610.57 01:09:15.202 clat (usec): min=3126, max=12449, avg=6228.39, stdev=699.79 01:09:15.202 lat (usec): min=3189, max=12451, avg=6230.09, stdev=699.76 01:09:15.202 clat percentiles (usec): 01:09:15.202 | 1.00th=[ 4490], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5735], 01:09:15.202 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6259], 01:09:15.202 | 70.00th=[ 6521], 80.00th=[ 6783], 90.00th=[ 7111], 95.00th=[ 7308], 01:09:15.202 | 99.00th=[ 7898], 99.50th=[ 8717], 99.90th=[11338], 99.95th=[11994], 01:09:15.202 | 99.99th=[12387] 01:09:15.202 bw ( KiB/s): min=39153, max=45912, per=100.00%, avg=42972.25, stdev=2945.93, samples=4 01:09:15.202 iops : min= 9788, max=11478, avg=10743.00, stdev=736.59, samples=4 01:09:15.202 write: IOPS=10.7k, BW=41.9MiB/s (43.9MB/s)(84.1MiB/2007msec); 0 zone resets 01:09:15.202 slat (nsec): min=1546, max=294375, avg=1727.74, stdev=2200.99 01:09:15.202 clat (usec): min=2967, max=12337, avg=5649.29, stdev=641.89 01:09:15.202 lat (usec): min=2983, max=12339, avg=5651.01, stdev=641.93 01:09:15.202 
clat percentiles (usec): 01:09:15.202 | 1.00th=[ 4080], 5.00th=[ 4883], 10.00th=[ 5014], 20.00th=[ 5211], 01:09:15.202 | 30.00th=[ 5342], 40.00th=[ 5407], 50.00th=[ 5538], 60.00th=[ 5669], 01:09:15.202 | 70.00th=[ 5932], 80.00th=[ 6128], 90.00th=[ 6456], 95.00th=[ 6652], 01:09:15.202 | 99.00th=[ 7242], 99.50th=[ 8029], 99.90th=[10814], 99.95th=[11469], 01:09:15.202 | 99.99th=[12256] 01:09:15.202 bw ( KiB/s): min=38834, max=45072, per=100.00%, avg=42908.50, stdev=2878.68, samples=4 01:09:15.202 iops : min= 9708, max=11268, avg=10727.00, stdev=719.91, samples=4 01:09:15.202 lat (msec) : 4=0.55%, 10=99.21%, 20=0.24% 01:09:15.202 cpu : usr=69.24%, sys=25.07%, ctx=12, majf=0, minf=7 01:09:15.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 01:09:15.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:09:15.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:09:15.202 issued rwts: total=21561,21526,0,0 short=0,0,0,0 dropped=0,0,0,0 01:09:15.202 latency : target=0, window=0, percentile=100.00%, depth=128 01:09:15.202 01:09:15.202 Run status group 0 (all jobs): 01:09:15.202 READ: bw=42.0MiB/s (44.0MB/s), 42.0MiB/s-42.0MiB/s (44.0MB/s-44.0MB/s), io=84.2MiB (88.3MB), run=2007-2007msec 01:09:15.202 WRITE: bw=41.9MiB/s (43.9MB/s), 41.9MiB/s-41.9MiB/s (43.9MB/s-43.9MB/s), io=84.1MiB (88.2MB), run=2007-2007msec 01:09:15.202 06:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 01:09:15.202 06:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 01:09:15.202 06:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:09:15.202 06:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:09:15.202 06:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 01:09:15.202 06:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:09:15.202 06:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 01:09:15.202 06:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 01:09:15.202 06:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:09:15.202 06:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:09:15.202 06:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:09:15.202 06:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 01:09:15.202 06:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 01:09:15.202 06:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:09:15.202 06:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:09:15.202 06:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:09:15.202 06:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:09:15.202 06:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:09:15.202 06:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 01:09:15.202 06:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:09:15.202 06:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 01:09:15.202 06:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 01:09:15.202 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 01:09:15.202 fio-3.35 01:09:15.202 Starting 1 thread 01:09:17.748 01:09:17.748 test: (groupid=0, jobs=1): err= 0: pid=74508: Mon Dec 9 06:08:11 2024 01:09:17.749 read: IOPS=10.0k, BW=157MiB/s (164MB/s)(314MiB/2006msec) 01:09:17.749 slat (nsec): min=2394, max=86901, avg=2649.08, stdev=1370.53 01:09:17.749 clat (usec): min=1670, max=15615, avg=7425.49, stdev=2055.65 01:09:17.749 lat (usec): min=1673, max=15618, avg=7428.14, stdev=2055.75 01:09:17.749 clat percentiles (usec): 01:09:17.749 | 1.00th=[ 3326], 5.00th=[ 4047], 10.00th=[ 4621], 20.00th=[ 5669], 01:09:17.749 | 30.00th=[ 6325], 40.00th=[ 6915], 50.00th=[ 7373], 60.00th=[ 7963], 01:09:17.749 | 70.00th=[ 8455], 80.00th=[ 9110], 90.00th=[10159], 95.00th=[10814], 01:09:17.749 | 99.00th=[12387], 99.50th=[13042], 99.90th=[14091], 99.95th=[14222], 01:09:17.749 | 99.99th=[15401] 01:09:17.749 bw ( KiB/s): min=72672, max=90944, per=49.37%, avg=79152.00, stdev=8164.31, samples=4 01:09:17.749 iops : min= 4542, max= 5684, avg=4947.00, stdev=510.27, samples=4 01:09:17.749 write: IOPS=5758, BW=90.0MiB/s (94.4MB/s)(162MiB/1800msec); 0 zone resets 01:09:17.749 slat (usec): min=27, max=373, avg=29.34, stdev= 7.75 01:09:17.749 clat (usec): min=4557, max=18266, avg=9679.14, stdev=2026.81 01:09:17.749 lat (usec): min=4585, max=18294, avg=9708.48, stdev=2028.65 01:09:17.749 clat percentiles (usec): 01:09:17.749 | 1.00th=[ 5997], 5.00th=[ 6849], 10.00th=[ 7373], 20.00th=[ 7963], 01:09:17.749 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[ 9896], 01:09:17.749 | 70.00th=[10552], 80.00th=[11469], 90.00th=[12518], 95.00th=[13304], 01:09:17.749 | 99.00th=[15139], 99.50th=[16057], 99.90th=[17171], 99.95th=[17957], 01:09:17.749 | 99.99th=[18220] 01:09:17.749 bw ( KiB/s): min=75072, max=93312, per=89.49%, avg=82456.00, stdev=7744.94, samples=4 01:09:17.749 iops : min= 4692, max= 5832, avg=5153.50, stdev=484.06, samples=4 01:09:17.749 lat (msec) : 2=0.05%, 4=2.85%, 10=76.35%, 20=20.75% 01:09:17.749 cpu : usr=79.35%, sys=16.91%, ctx=5, majf=0, minf=14 01:09:17.749 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 01:09:17.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:09:17.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:09:17.749 issued rwts: total=20101,10366,0,0 short=0,0,0,0 dropped=0,0,0,0 01:09:17.749 latency : target=0, window=0, percentile=100.00%, depth=128 01:09:17.749 01:09:17.749 Run status group 0 (all jobs): 
01:09:17.749 READ: bw=157MiB/s (164MB/s), 157MiB/s-157MiB/s (164MB/s-164MB/s), io=314MiB (329MB), run=2006-2006msec 01:09:17.749 WRITE: bw=90.0MiB/s (94.4MB/s), 90.0MiB/s-90.0MiB/s (94.4MB/s-94.4MB/s), io=162MiB (170MB), run=1800-1800msec 01:09:17.749 06:08:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:09:17.749 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 01:09:17.749 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 01:09:17.749 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 01:09:17.749 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 01:09:17.749 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 01:09:17.749 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 01:09:17.749 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:09:17.749 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 01:09:17.749 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 01:09:17.749 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:09:17.749 rmmod nvme_tcp 01:09:17.749 rmmod nvme_fabrics 01:09:17.749 rmmod nvme_keyring 01:09:17.749 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:09:17.749 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 01:09:17.749 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 01:09:17.749 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 74382 ']' 01:09:17.749 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 74382 01:09:17.749 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 74382 ']' 01:09:17.749 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 74382 01:09:17.749 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 01:09:17.749 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:09:17.749 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74382 01:09:17.749 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:09:17.749 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:09:17.749 killing process with pid 74382 01:09:17.749 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74382' 01:09:17.749 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 74382 01:09:17.749 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 74382 01:09:18.009 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:09:18.009 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:09:18.009 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:09:18.009 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 01:09:18.009 06:08:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 01:09:18.009 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 01:09:18.009 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:09:18.009 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:09:18.009 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:09:18.009 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:09:18.009 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:09:18.009 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:09:18.269 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:09:18.269 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:09:18.269 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:09:18.269 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:09:18.269 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:09:18.269 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:09:18.269 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:09:18.269 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:09:18.269 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:09:18.269 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:09:18.269 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 01:09:18.269 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:09:18.269 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:09:18.269 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:09:18.269 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 01:09:18.269 01:09:18.269 real 0m9.092s 01:09:18.269 user 0m34.446s 01:09:18.269 sys 0m2.869s 01:09:18.269 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 01:09:18.269 06:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 01:09:18.269 ************************************ 01:09:18.269 END TEST nvmf_fio_host 01:09:18.269 ************************************ 01:09:18.528 06:08:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 01:09:18.528 06:08:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:09:18.528 06:08:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:09:18.528 06:08:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:09:18.528 ************************************ 01:09:18.528 START TEST nvmf_failover 
01:09:18.528 ************************************ 01:09:18.528 06:08:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 01:09:18.528 * Looking for test storage... 01:09:18.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:09:18.528 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:09:18.528 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 01:09:18.528 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:09:18.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:18.789 --rc genhtml_branch_coverage=1 01:09:18.789 --rc genhtml_function_coverage=1 01:09:18.789 --rc genhtml_legend=1 01:09:18.789 --rc geninfo_all_blocks=1 01:09:18.789 --rc geninfo_unexecuted_blocks=1 01:09:18.789 01:09:18.789 ' 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:09:18.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:18.789 --rc genhtml_branch_coverage=1 01:09:18.789 --rc genhtml_function_coverage=1 01:09:18.789 --rc genhtml_legend=1 01:09:18.789 --rc geninfo_all_blocks=1 01:09:18.789 --rc geninfo_unexecuted_blocks=1 01:09:18.789 01:09:18.789 ' 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:09:18.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:18.789 --rc genhtml_branch_coverage=1 01:09:18.789 --rc genhtml_function_coverage=1 01:09:18.789 --rc genhtml_legend=1 01:09:18.789 --rc geninfo_all_blocks=1 01:09:18.789 --rc geninfo_unexecuted_blocks=1 01:09:18.789 01:09:18.789 ' 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:09:18.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:18.789 --rc genhtml_branch_coverage=1 01:09:18.789 --rc genhtml_function_coverage=1 01:09:18.789 --rc genhtml_legend=1 01:09:18.789 --rc geninfo_all_blocks=1 01:09:18.789 --rc geninfo_unexecuted_blocks=1 01:09:18.789 01:09:18.789 ' 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:09:18.789 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:18.790 
06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:09:18.790 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
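With NET_TYPE=virt, nvmftestinit falls through to nvmf_veth_init below, which first tears down any leftover devices (hence the harmless "Cannot find device" messages on a clean node) and then rebuilds the virtual test network: veth pairs for the initiator and target sides, the target legs moved into the nvmf_tgt_ns_spdk namespace, and everything joined by the nvmf_br bridge. A condensed, hedged sketch of that topology, assembled from the commands in this log (the second interface pair, the iptables comment tags, and error handling are omitted):

  # Sketch only; the real setup lives in test/nvmf/common.sh (nvmf_veth_init).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator leg stays on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target leg goes into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The result is that the host-side initiator at 10.0.0.1 reaches the namespaced target at 10.0.0.3 across nvmf_br, which is exactly what the four ping checks further down verify.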
01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:09:18.790 Cannot find device "nvmf_init_br" 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:09:18.790 Cannot find device "nvmf_init_br2" 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
01:09:18.790 Cannot find device "nvmf_tgt_br" 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:09:18.790 Cannot find device "nvmf_tgt_br2" 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:09:18.790 Cannot find device "nvmf_init_br" 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:09:18.790 Cannot find device "nvmf_init_br2" 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:09:18.790 Cannot find device "nvmf_tgt_br" 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 01:09:18.790 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:09:19.050 Cannot find device "nvmf_tgt_br2" 01:09:19.050 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 01:09:19.050 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:09:19.050 Cannot find device "nvmf_br" 01:09:19.050 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 01:09:19.050 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:09:19.050 Cannot find device "nvmf_init_if" 01:09:19.050 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 01:09:19.050 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:09:19.050 Cannot find device "nvmf_init_if2" 01:09:19.050 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 01:09:19.050 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:09:19.050 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:09:19.050 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 01:09:19.050 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:09:19.050 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:09:19.050 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 01:09:19.050 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:09:19.050 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:09:19.050 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:09:19.050 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:09:19.050 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:09:19.050 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:09:19.051 
06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:09:19.051 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:09:19.051 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:09:19.051 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:09:19.051 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:09:19.051 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:09:19.051 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:09:19.051 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:09:19.051 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:09:19.051 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:09:19.051 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:09:19.051 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:09:19.051 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:09:19.051 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:09:19.311 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:09:19.311 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 01:09:19.311 01:09:19.311 --- 10.0.0.3 ping statistics --- 01:09:19.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:19.311 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:09:19.311 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:09:19.311 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.080 ms 01:09:19.311 01:09:19.311 --- 10.0.0.4 ping statistics --- 01:09:19.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:19.311 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:09:19.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:09:19.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 01:09:19.311 01:09:19.311 --- 10.0.0.1 ping statistics --- 01:09:19.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:19.311 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:09:19.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:09:19.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 01:09:19.311 01:09:19.311 --- 10.0.0.2 ping statistics --- 01:09:19.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:19.311 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=74781 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 74781 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 74781 ']' 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # 
local rpc_addr=/var/tmp/spdk.sock 01:09:19.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 01:09:19.311 06:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:09:19.311 [2024-12-09 06:08:13.873479] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:09:19.311 [2024-12-09 06:08:13.873540] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:09:19.570 [2024-12-09 06:08:14.010453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:09:19.570 [2024-12-09 06:08:14.068462] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:09:19.570 [2024-12-09 06:08:14.068508] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:09:19.570 [2024-12-09 06:08:14.068518] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:09:19.571 [2024-12-09 06:08:14.068525] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:09:19.571 [2024-12-09 06:08:14.068532] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
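nvmfappstart launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the application answers on /var/tmp/spdk.sock, so the configuration RPCs issued next cannot race the startup messages above. A minimal stand-in for that wait (a hypothetical polling loop, not the actual waitforlisten helper from autotest_common.sh, which also checks the pid and caps retries):

  # Hypothetical sketch: poll the RPC socket until the target responds.
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for _ in $(seq 1 100); do
      "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done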
01:09:19.571 [2024-12-09 06:08:14.070176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:09:19.571 [2024-12-09 06:08:14.070274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:09:19.571 [2024-12-09 06:08:14.070276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:09:19.571 [2024-12-09 06:08:14.151166] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:09:20.509 06:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:09:20.509 06:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 01:09:20.509 06:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:09:20.509 06:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 01:09:20.509 06:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:09:20.509 06:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:09:20.509 06:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:09:20.509 [2024-12-09 06:08:14.998699] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:09:20.509 06:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:09:20.767 Malloc0 01:09:20.767 06:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:09:21.025 06:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:09:21.284 06:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:09:21.284 [2024-12-09 06:08:15.845268] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:09:21.284 06:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:09:21.545 [2024-12-09 06:08:16.049152] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:09:21.545 06:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 01:09:21.803 [2024-12-09 06:08:16.253033] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 01:09:21.803 06:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=74834 01:09:21.803 06:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 01:09:21.804 06:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
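Taken together, the RPC calls above configure the entire target side of the failover scenario: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, the subsystem nqn.2016-06.io.spdk:cnode1 exporting that bdev, and three listeners on 10.0.0.3 at ports 4420, 4421 and 4422, after which bdevperf is started with its own RPC socket so the host side can be reconfigured independently of the target. Collected into one place (arguments exactly as issued in this run; backgrounding of bdevperf is simplified for the sketch):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc0
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s $port
  done
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &

The test then attaches NVMe0 through the bdevperf RPC socket at port 4420 with -x failover, adds 4421 and later 4422 as alternate paths, and removes and re-adds listeners to force path switches while I/O is running.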
01:09:21.804 06:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 74834 /var/tmp/bdevperf.sock 01:09:21.804 06:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 74834 ']' 01:09:21.804 06:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:09:21.804 06:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 01:09:21.804 06:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:09:21.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:09:21.804 06:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 01:09:21.804 06:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:09:22.739 06:08:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:09:22.739 06:08:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 01:09:22.739 06:08:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 01:09:22.998 NVMe0n1 01:09:22.998 06:08:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 01:09:23.257 01:09:23.257 06:08:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:09:23.257 06:08:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=74858 01:09:23.257 06:08:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 01:09:24.191 06:08:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:09:24.450 06:08:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 01:09:27.737 06:08:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 01:09:27.737 01:09:27.737 06:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:09:27.996 [2024-12-09 06:08:22.407777] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2126930 is same with the state(6) to be set 01:09:27.996 [2024-12-09 06:08:22.408047] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2126930 is same with the state(6) to be set 01:09:27.996 [2024-12-09 06:08:22.408061] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2126930 is same with the state(6) to be set 01:09:27.996 [2024-12-09 06:08:22.408071] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2126930 
is same with the state(6) to be set 01:09:27.996 [2024-12-09 06:08:22.408079] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2126930 is same with the state(6) to be set 01:09:27.996 06:08:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 01:09:31.290 06:08:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:09:31.290 [2024-12-09 06:08:25.617893] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:09:31.290 06:08:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 01:09:32.226 06:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 01:09:32.484 06:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 74858 01:09:39.104 { 01:09:39.104 "results": [ 01:09:39.104 { 01:09:39.104 "job": "NVMe0n1", 01:09:39.104 "core_mask": "0x1", 01:09:39.104 "workload": "verify", 01:09:39.104 "status": "finished", 01:09:39.104 "verify_range": { 01:09:39.104 "start": 0, 01:09:39.104 "length": 16384 01:09:39.104 }, 01:09:39.104 "queue_depth": 128, 01:09:39.104 "io_size": 4096, 01:09:39.104 "runtime": 15.009704, 01:09:39.104 "iops": 10089.939148700068, 01:09:39.104 "mibps": 39.41382479960964, 01:09:39.104 "io_failed": 4397, 01:09:39.104 "io_timeout": 0, 01:09:39.104 "avg_latency_us": 12305.624340023269, 01:09:39.104 "min_latency_us": 424.40481927710846, 01:09:39.104 "max_latency_us": 15475.971084337349 01:09:39.104 } 01:09:39.104 ], 01:09:39.104 "core_count": 1 01:09:39.104 } 01:09:39.104 06:08:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 74834 01:09:39.104 06:08:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 74834 ']' 01:09:39.104 06:08:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 74834 01:09:39.104 06:08:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 01:09:39.104 06:08:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:09:39.104 06:08:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74834 01:09:39.104 06:08:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:09:39.104 killing process with pid 74834 01:09:39.104 06:08:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:09:39.104 06:08:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74834' 01:09:39.104 06:08:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 74834 01:09:39.104 06:08:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 74834 01:09:39.104 06:08:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:09:39.104 [2024-12-09 06:08:16.319237] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
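The JSON block above is bdevperf's per-job summary for the 15 second verify run: roughly 10,090 IOPS (about 39.4 MiB/s) at an average latency of ~12.3 ms, with 4397 I/Os reported failed while listeners were being removed underneath the active path. If that output were captured to a file, say bdevperf_results.json (a hypothetical name; this run only prints it to the log), the headline numbers could be pulled out with jq:

  # Hypothetical post-processing step; assumes jq is available and the JSON was saved as-is.
  jq -r '.results[] | "\(.job): \(.iops|floor) IOPS, \(.io_failed) failed I/Os, avg latency \(.avg_latency_us) us"' \
      bdevperf_results.json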
01:09:39.104 [2024-12-09 06:08:16.319647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74834 ] 01:09:39.104 [2024-12-09 06:08:16.472802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:09:39.104 [2024-12-09 06:08:16.513494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:09:39.104 [2024-12-09 06:08:16.555103] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:09:39.104 Running I/O for 15 seconds... 01:09:39.104 9677.00 IOPS, 37.80 MiB/s [2024-12-09T06:08:33.691Z] [2024-12-09 06:08:18.918028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.104 [2024-12-09 06:08:18.918083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.104 [2024-12-09 06:08:18.918136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.104 [2024-12-09 06:08:18.918152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.104 [2024-12-09 06:08:18.918167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.104 [2024-12-09 06:08:18.918182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.104 [2024-12-09 06:08:18.918197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.104 [2024-12-09 06:08:18.918210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.104 [2024-12-09 06:08:18.918224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.104 [2024-12-09 06:08:18.918237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.104 [2024-12-09 06:08:18.918271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.104 [2024-12-09 06:08:18.918286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.104 [2024-12-09 06:08:18.918301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.104 [2024-12-09 06:08:18.918315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.104 [2024-12-09 06:08:18.918330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.104 [2024-12-09 06:08:18.918343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
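The long run of nvme_qpair prints that follows (and continues for the rest of try.txt) is expected for this test rather than a sign of trouble: each WRITE or READ command print is paired with an ABORTED - SQ DELETION completion, which is what in-flight I/O looks like when the queue pair it was submitted on is torn down because its listener (10.0.0.3:4420 at this point in the run) has just been removed. A quick, hedged way to gauge how much I/O was caught in the path drops when inspecting a saved try.txt:

  # Hypothetical tally over the captured log file.
  grep -c 'ABORTED - SQ DELETION' try.txt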
01:09:39.104 [2024-12-09 06:08:18.918358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.104 [2024-12-09 06:08:18.918371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.104 [2024-12-09 06:08:18.918386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.104 [2024-12-09 06:08:18.918399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.104 [2024-12-09 06:08:18.918413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.104 [2024-12-09 06:08:18.918458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.104 [2024-12-09 06:08:18.918474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.104 [2024-12-09 06:08:18.918487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.104 [2024-12-09 06:08:18.918502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.104 [2024-12-09 06:08:18.918516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.104 [2024-12-09 06:08:18.918531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.104 [2024-12-09 06:08:18.918545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.104 [2024-12-09 06:08:18.918560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.104 [2024-12-09 06:08:18.918573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.104 [2024-12-09 06:08:18.918588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.104 [2024-12-09 06:08:18.918601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.104 [2024-12-09 06:08:18.918617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:89256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.104 [2024-12-09 06:08:18.918645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.104 [2024-12-09 06:08:18.918660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:89264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.104 [2024-12-09 06:08:18.918672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.104 [2024-12-09 06:08:18.918686] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:89272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.104 [2024-12-09 06:08:18.918699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.104 [2024-12-09 06:08:18.918713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:89280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.104 [2024-12-09 06:08:18.918725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.104 [2024-12-09 06:08:18.918739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:89288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.104 [2024-12-09 06:08:18.918751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.104 [2024-12-09 06:08:18.918765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:89296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.104 [2024-12-09 06:08:18.918781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.104 [2024-12-09 06:08:18.918795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.104 [2024-12-09 06:08:18.918808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.104 [2024-12-09 06:08:18.918831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:89312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.104 [2024-12-09 06:08:18.918843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.104 [2024-12-09 06:08:18.918857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.104 [2024-12-09 06:08:18.918870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.105 [2024-12-09 06:08:18.918883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.105 [2024-12-09 06:08:18.918897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.105 [2024-12-09 06:08:18.918911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.105 [2024-12-09 06:08:18.918924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.105 [2024-12-09 06:08:18.918937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.105 [2024-12-09 06:08:18.918950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.105 [2024-12-09 06:08:18.918964] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:09:39.105 [2024-12-09 06:08:18.918977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:09:39.105 [... 2024-12-09 06:08:18.918991 to 06:08:18.921735: repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs for queued qid:1 I/O aborted during SQ deletion: WRITE lba:89872-90264 and READ lba:89320-89696, each len:8, all completed ABORTED - SQ DELETION (00/08) ...]
01:09:39.107 [2024-12-09 06:08:18.921769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
01:09:39.107 [2024-12-09 06:08:18.921780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
01:09:39.107 [2024-12-09 06:08:18.921791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90272 len:8 PRP1 0x0 PRP2 0x0
01:09:39.107 [2024-12-09 06:08:18.921805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:09:39.107 [2024-12-09 06:08:18.921873] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
01:09:39.107 [2024-12-09 06:08:18.921924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
01:09:39.107 [2024-12-09 06:08:18.921940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:09:39.107 [... 2024-12-09 06:08:18.921956 to 06:08:18.922026: ASYNC EVENT REQUEST (0c) qid:0 cid:1-3 likewise aborted with ABORTED - SQ DELETION (00/08) ...]
01:09:39.107 [2024-12-09 06:08:18.922039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
01:09:39.107 [2024-12-09 06:08:18.924742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
01:09:39.107 [2024-12-09 06:08:18.924780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1217c60 (9): Bad file descriptor
01:09:39.107 [2024-12-09 06:08:18.946433] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
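The failover above moves I/O from listener 10.0.0.3:4420 to 10.0.0.3:4421 on the same subsystem (nqn.2016-06.io.spdk:cnode1). A minimal sketch of how such a two-listener, two-path controller is typically wired up with SPDK's rpc.py follows; the target/initiator split, the Malloc0 and Nvme0 names, and the --multipath failover option are assumptions for illustration, not the exact commands this job ran:
# target side: expose one TCP subsystem on two listeners (addresses taken from the log above)
rpc.py nvmf_create_transport -t tcp
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
# initiator side: attach both paths under one bdev_nvme controller name so that
# bdev_nvme_failover_trid can switch trids when the active path fails
rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --multipath failover
rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4421 -n nqn.2016-06.io.spdk:cnode1 --multipath failover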
01:09:39.107 10028.00 IOPS, 39.17 MiB/s [2024-12-09T06:08:33.694Z] 10271.00 IOPS, 40.12 MiB/s [2024-12-09T06:08:33.694Z] 10768.50 IOPS, 42.06 MiB/s [2024-12-09T06:08:33.694Z]
[2024-12-09 06:08:22.408156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:09:39.107 [2024-12-09 06:08:22.408207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:09:39.108 [... 2024-12-09 06:08:22.408228 to 06:08:22.410933: repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs for queued qid:1 I/O aborted during SQ deletion: READ lba:21936-22184 and WRITE lba:22296-22832, each len:8, all completed ABORTED - SQ DELETION (00/08) ...]
01:09:39.110 [2024-12-09 06:08:22.410946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.110 [2024-12-09 06:08:22.410960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.110 [2024-12-09 06:08:22.410972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.110 [2024-12-09 06:08:22.410986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.110 [2024-12-09 06:08:22.411003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.110 [2024-12-09 06:08:22.411017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.110 [2024-12-09 06:08:22.411029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.110 [2024-12-09 06:08:22.411043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.110 [2024-12-09 06:08:22.411055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.110 [2024-12-09 06:08:22.411069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.110 [2024-12-09 06:08:22.411081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.110 [2024-12-09 06:08:22.411103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.110 [2024-12-09 06:08:22.411115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:22.411129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.111 [2024-12-09 06:08:22.411142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:22.411156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.111 [2024-12-09 06:08:22.411168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:22.411182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.111 [2024-12-09 06:08:22.411196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:22.411209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.111 [2024-12-09 06:08:22.411226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:22.411240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.111 [2024-12-09 06:08:22.411253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:22.411266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.111 [2024-12-09 06:08:22.411279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:22.411293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.111 [2024-12-09 06:08:22.411305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:22.411319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.111 [2024-12-09 06:08:22.411331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:22.411349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.111 [2024-12-09 06:08:22.411362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:22.411376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.111 [2024-12-09 06:08:22.411388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:22.411402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.111 [2024-12-09 06:08:22.411414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:22.411428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.111 [2024-12-09 06:08:22.411440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:22.411455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.111 [2024-12-09 06:08:22.411467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:22.411481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.111 [2024-12-09 06:08:22.411494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 
06:08:22.411507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.111 [2024-12-09 06:08:22.411520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:22.411534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.111 [2024-12-09 06:08:22.411546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:22.411560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.111 [2024-12-09 06:08:22.411572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:22.411587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.111 [2024-12-09 06:08:22.411599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:22.411613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.111 [2024-12-09 06:08:22.411626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:22.411640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.111 [2024-12-09 06:08:22.411652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:22.411686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:09:39.111 [2024-12-09 06:08:22.411697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:09:39.111 [2024-12-09 06:08:22.411712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:8 PRP1 0x0 PRP2 0x0 01:09:39.111 [2024-12-09 06:08:22.411725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:22.411776] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 01:09:39.111 [2024-12-09 06:08:22.411822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:09:39.111 [2024-12-09 06:08:22.411837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:22.411850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:09:39.111 [2024-12-09 06:08:22.411862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 
06:08:22.411875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:09:39.111 [2024-12-09 06:08:22.411888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:22.411900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:09:39.111 [2024-12-09 06:08:22.411913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:22.411926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 01:09:39.111 [2024-12-09 06:08:22.414596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 01:09:39.111 [2024-12-09 06:08:22.414632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1217c60 (9): Bad file descriptor 01:09:39.111 [2024-12-09 06:08:22.444591] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 01:09:39.111 10943.20 IOPS, 42.75 MiB/s [2024-12-09T06:08:33.698Z] 11134.00 IOPS, 43.49 MiB/s [2024-12-09T06:08:33.698Z] 11262.29 IOPS, 43.99 MiB/s [2024-12-09T06:08:33.698Z] 11376.50 IOPS, 44.44 MiB/s [2024-12-09T06:08:33.698Z] 11459.78 IOPS, 44.76 MiB/s [2024-12-09T06:08:33.698Z] [2024-12-09 06:08:26.827627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:67024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.111 [2024-12-09 06:08:26.827676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:26.827712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:67032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.111 [2024-12-09 06:08:26.827726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:26.827740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:67040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.111 [2024-12-09 06:08:26.827753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:26.827768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:67048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.111 [2024-12-09 06:08:26.827780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:26.827794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:67056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.111 [2024-12-09 06:08:26.827806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:26.827840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:67064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.111 [2024-12-09 06:08:26.827853] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.111 [2024-12-09 06:08:26.827867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:67072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.111 [2024-12-09 06:08:26.827879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.827892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:67080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.112 [2024-12-09 06:08:26.827905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.827919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:67536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.112 [2024-12-09 06:08:26.827932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.827946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:67544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.112 [2024-12-09 06:08:26.827958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.827972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.112 [2024-12-09 06:08:26.827984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.827998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:67560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.112 [2024-12-09 06:08:26.828010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:67568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.112 [2024-12-09 06:08:26.828036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:67576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.112 [2024-12-09 06:08:26.828062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:67584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.112 [2024-12-09 06:08:26.828088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:67592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.112 [2024-12-09 06:08:26.828126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:67600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.112 [2024-12-09 06:08:26.828152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:67608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.112 [2024-12-09 06:08:26.828187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:67616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.112 [2024-12-09 06:08:26.828214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:67624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.112 [2024-12-09 06:08:26.828241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:67632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.112 [2024-12-09 06:08:26.828267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:67640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.112 [2024-12-09 06:08:26.828293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:67648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.112 [2024-12-09 06:08:26.828319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:67656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.112 [2024-12-09 06:08:26.828345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:67088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.112 [2024-12-09 06:08:26.828371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:67096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.112 [2024-12-09 06:08:26.828397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:67104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.112 [2024-12-09 06:08:26.828423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:67112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.112 [2024-12-09 06:08:26.828449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:67120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.112 [2024-12-09 06:08:26.828475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:67128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.112 [2024-12-09 06:08:26.828501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:67136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.112 [2024-12-09 06:08:26.828532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:67144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.112 [2024-12-09 06:08:26.828558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:67152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.112 [2024-12-09 06:08:26.828584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.112 [2024-12-09 06:08:26.828612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:67168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.112 [2024-12-09 06:08:26.828638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:67176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.112 [2024-12-09 06:08:26.828664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 
06:08:26.828679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:67184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.112 [2024-12-09 06:08:26.828691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:67192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.112 [2024-12-09 06:08:26.828717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.112 [2024-12-09 06:08:26.828743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.112 [2024-12-09 06:08:26.828769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:67216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.112 [2024-12-09 06:08:26.828795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:67224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.112 [2024-12-09 06:08:26.828821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.112 [2024-12-09 06:08:26.828847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.112 [2024-12-09 06:08:26.828865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:67240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.112 [2024-12-09 06:08:26.828878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.828892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:67248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.113 [2024-12-09 06:08:26.828905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.828918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:67256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.113 [2024-12-09 06:08:26.828932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.828945] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.113 [2024-12-09 06:08:26.828957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.828971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:67272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.113 [2024-12-09 06:08:26.828984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.828997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:67664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.113 [2024-12-09 06:08:26.829010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.829024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:67672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.113 [2024-12-09 06:08:26.829037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.829051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:67680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.113 [2024-12-09 06:08:26.829063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.829077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:67688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.113 [2024-12-09 06:08:26.829097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.829111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:67696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.113 [2024-12-09 06:08:26.829124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.829138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:67704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.113 [2024-12-09 06:08:26.829150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.829164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:67712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.113 [2024-12-09 06:08:26.829177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.829191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:67720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.113 [2024-12-09 06:08:26.829208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.829222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:14 nsid:1 lba:67728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.113 [2024-12-09 06:08:26.829235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.829248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:67736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.113 [2024-12-09 06:08:26.829262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.829276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:67744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.113 [2024-12-09 06:08:26.829288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.829302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:67752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.113 [2024-12-09 06:08:26.829315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.829329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:67280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.113 [2024-12-09 06:08:26.829341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.829355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.113 [2024-12-09 06:08:26.829367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.829390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:67296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.113 [2024-12-09 06:08:26.829403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.829417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:67304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.113 [2024-12-09 06:08:26.829430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.829443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.113 [2024-12-09 06:08:26.829456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.829469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:67320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.113 [2024-12-09 06:08:26.829482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.829496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67328 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 01:09:39.113 [2024-12-09 06:08:26.829509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.829523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:67336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.113 [2024-12-09 06:08:26.829536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.829554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:67344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.113 [2024-12-09 06:08:26.829567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.829580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:67352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.113 [2024-12-09 06:08:26.829593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.829607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:67360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.113 [2024-12-09 06:08:26.829620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.829633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:67368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.113 [2024-12-09 06:08:26.829646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.829660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:67376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.113 [2024-12-09 06:08:26.829672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.829686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:67384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.113 [2024-12-09 06:08:26.829698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.829712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:67392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.113 [2024-12-09 06:08:26.829724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.829738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:67400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.113 [2024-12-09 06:08:26.829750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.113 [2024-12-09 06:08:26.829764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:67760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.113 
[2024-12-09 06:08:26.829777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.829792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.829804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.829817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:67776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.829830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.829844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:67784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.829856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.829870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:67792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.829887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.829901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:67800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.829914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.829928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:67808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.829941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.829955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:67816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.829968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.829981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:67824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.829994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.830020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:67840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.830046] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:67848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.830072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:67856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.830108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:67864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.830135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:67872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.830162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:67880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.830189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:67888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.830215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:67896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.830248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:67904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.830274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:67408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.114 [2024-12-09 06:08:26.830300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:67416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.114 [2024-12-09 06:08:26.830327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:67424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.114 [2024-12-09 06:08:26.830359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:67432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.114 [2024-12-09 06:08:26.830387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:67440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.114 [2024-12-09 06:08:26.830413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:67448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.114 [2024-12-09 06:08:26.830439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:67456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.114 [2024-12-09 06:08:26.830465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:67464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.114 [2024-12-09 06:08:26.830491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:67912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.830518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:67920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.830544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:67928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.830570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:67936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.830601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:67944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.830628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:67952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.830655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:67960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.830681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:67968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.830708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:67976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.830734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:67984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.830760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.830787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.114 [2024-12-09 06:08:26.830801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.114 [2024-12-09 06:08:26.830814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.115 [2024-12-09 06:08:26.830827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.115 [2024-12-09 06:08:26.830840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.115 [2024-12-09 06:08:26.830853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:68016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.115 [2024-12-09 06:08:26.830866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.115 [2024-12-09 
06:08:26.830879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.115 [2024-12-09 06:08:26.830892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.115 [2024-12-09 06:08:26.830906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:68032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.115 [2024-12-09 06:08:26.830923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.115 [2024-12-09 06:08:26.830937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:68040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:09:39.115 [2024-12-09 06:08:26.830949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.115 [2024-12-09 06:08:26.830963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:67472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.115 [2024-12-09 06:08:26.830976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.115 [2024-12-09 06:08:26.830990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:67480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.115 [2024-12-09 06:08:26.831002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.115 [2024-12-09 06:08:26.831016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.115 [2024-12-09 06:08:26.831028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.115 [2024-12-09 06:08:26.831042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:67496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.115 [2024-12-09 06:08:26.831055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.115 [2024-12-09 06:08:26.831068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:67504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.115 [2024-12-09 06:08:26.831081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.115 [2024-12-09 06:08:26.831102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:67512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.115 [2024-12-09 06:08:26.831115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.115 [2024-12-09 06:08:26.831129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:67520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:09:39.115 [2024-12-09 06:08:26.831142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.115 [2024-12-09 06:08:26.831183] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:09:39.115 [2024-12-09 06:08:26.831193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:09:39.115 [2024-12-09 06:08:26.831203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67528 len:8 PRP1 0x0 PRP2 0x0 01:09:39.115 [2024-12-09 06:08:26.831215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.115 [2024-12-09 06:08:26.831269] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 01:09:39.115 [2024-12-09 06:08:26.831316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:09:39.115 [2024-12-09 06:08:26.831331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.115 [2024-12-09 06:08:26.831344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:09:39.115 [2024-12-09 06:08:26.831357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.115 [2024-12-09 06:08:26.831376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:09:39.115 [2024-12-09 06:08:26.831389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.115 [2024-12-09 06:08:26.831402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:09:39.115 [2024-12-09 06:08:26.831415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:39.115 [2024-12-09 06:08:26.831428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 01:09:39.115 [2024-12-09 06:08:26.834102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 01:09:39.115 [2024-12-09 06:08:26.834138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1217c60 (9): Bad file descriptor 01:09:39.115 [2024-12-09 06:08:26.859518] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
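In plain terms, the failover sequence recorded above can be reproduced by hand with the same RPCs the test script issues: attach one controller name over several TCP paths with -x failover, then drop the path that is currently active so bdev_nvme fails over to the next one. This is only an illustrative sketch distilled from the trace below (socket path, address, ports and NQN are the ones used by this test run), not the test script itself:

  # Attach the same subsystem over two TCP paths; -x failover enables path failover.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -x failover
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -x failover
  # Remove the active path; bdev_nvme is expected to fail over and log
  # "Resetting controller successful", which is what the test counts.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1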
01:09:39.115 11087.70 IOPS, 43.31 MiB/s [2024-12-09T06:08:33.702Z] 10787.45 IOPS, 42.14 MiB/s [2024-12-09T06:08:33.702Z] 10547.92 IOPS, 41.20 MiB/s [2024-12-09T06:08:33.702Z] 10345.15 IOPS, 40.41 MiB/s [2024-12-09T06:08:33.702Z] 10153.21 IOPS, 39.66 MiB/s [2024-12-09T06:08:33.702Z] 10087.93 IOPS, 39.41 MiB/s
01:09:39.115 Latency(us)
01:09:39.115 [2024-12-09T06:08:33.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:09:39.115 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
01:09:39.115 Verification LBA range: start 0x0 length 0x4000
01:09:39.115 NVMe0n1 : 15.01 10089.94 39.41 292.94 0.00 12305.62 424.40 15475.97
01:09:39.115 [2024-12-09T06:08:33.702Z] ===================================================================================================================
01:09:39.115 [2024-12-09T06:08:33.702Z] Total : 10089.94 39.41 292.94 0.00 12305.62 424.40 15475.97
01:09:39.115 Received shutdown signal, test time was about 15.000000 seconds
01:09:39.115
01:09:39.115 Latency(us)
01:09:39.115 [2024-12-09T06:08:33.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:09:39.115 [2024-12-09T06:08:33.702Z] ===================================================================================================================
01:09:39.115 [2024-12-09T06:08:33.702Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:09:39.115 06:08:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
01:09:39.115 06:08:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
01:09:39.115 06:08:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
01:09:39.115 06:08:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
01:09:39.115 06:08:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75035
01:09:39.115 06:08:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75035 /var/tmp/bdevperf.sock
01:09:39.115 06:08:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75035 ']'
01:09:39.115 06:08:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
01:09:39.115 06:08:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
01:09:39.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
01:09:39.115 06:08:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
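The waitforlisten step being traced here boils down to: start bdevperf with no bdev configuration (-z) and a private RPC socket (-r), then poll that socket until it answers. A simplified sketch of that idea, with the flags and paths taken from the trace (the polling loop is illustrative, not the actual autotest helper):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # Poll the RPC socket until bdevperf is ready to accept configuration.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done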
01:09:39.115 06:08:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
01:09:39.115 06:08:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
01:09:39.375 06:08:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:09:39.375 06:08:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
01:09:39.375 06:08:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
01:09:39.634 [2024-12-09 06:08:34.115462] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
01:09:39.634 06:08:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
01:09:39.893 [2024-12-09 06:08:34.315348] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 ***
01:09:39.893 06:08:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
01:09:40.152 NVMe0n1
01:09:40.152 06:08:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
01:09:40.411
01:09:40.411 06:08:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
01:09:40.669
01:09:40.669 06:08:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
01:09:40.669 06:08:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
01:09:40.928 06:08:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
01:09:40.928 06:08:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
01:09:44.272 06:08:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
01:09:44.272 06:08:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
01:09:44.272 06:08:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75112
01:09:44.272 06:08:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
01:09:44.272 06:08:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75112
01:09:45.650 {
01:09:45.650 "results": [
01:09:45.650 {
01:09:45.650 "job": "NVMe0n1",
01:09:45.650 "core_mask": "0x1",
01:09:45.650 "workload": "verify",
01:09:45.650 "status": "finished",
01:09:45.650 "verify_range": {
01:09:45.650 "start": 0,
01:09:45.650 "length": 16384
01:09:45.650 },
01:09:45.650 "queue_depth": 128,
01:09:45.650 "io_size": 4096, 01:09:45.650 "runtime": 1.003933, 01:09:45.650 "iops": 7627.999079619855, 01:09:45.650 "mibps": 29.79687140476506, 01:09:45.650 "io_failed": 0, 01:09:45.650 "io_timeout": 0, 01:09:45.650 "avg_latency_us": 16729.457382625304, 01:09:45.650 "min_latency_us": 1434.4224899598394, 01:09:45.650 "max_latency_us": 16634.036947791166 01:09:45.650 } 01:09:45.650 ], 01:09:45.650 "core_count": 1 01:09:45.650 } 01:09:45.651 06:08:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:09:45.651 [2024-12-09 06:08:33.076040] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:09:45.651 [2024-12-09 06:08:33.076148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75035 ] 01:09:45.651 [2024-12-09 06:08:33.210106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:09:45.651 [2024-12-09 06:08:33.253872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:09:45.651 [2024-12-09 06:08:33.295568] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:09:45.651 [2024-12-09 06:08:35.495151] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 01:09:45.651 [2024-12-09 06:08:35.495238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:09:45.651 [2024-12-09 06:08:35.495257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:45.651 [2024-12-09 06:08:35.495273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:09:45.651 [2024-12-09 06:08:35.495285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:45.651 [2024-12-09 06:08:35.495299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:09:45.651 [2024-12-09 06:08:35.495311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:45.651 [2024-12-09 06:08:35.495324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:09:45.651 [2024-12-09 06:08:35.495336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:45.651 [2024-12-09 06:08:35.495349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 01:09:45.651 [2024-12-09 06:08:35.495386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 01:09:45.651 [2024-12-09 06:08:35.495408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5fcc60 (9): Bad file descriptor 01:09:45.651 [2024-12-09 06:08:35.499591] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 
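The JSON block above is the output of the bdevperf.py perform_tests call traced earlier. A hedged sketch of collecting and summarizing such output outside the test (the jq post-processing and the temporary file are illustrative and not part of the test script; the keys match the JSON shown above):

  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock \
      perform_tests > /tmp/bdevperf_result.json
  # Pull out the headline numbers: IOPS and average latency in microseconds.
  jq '.results[0] | {iops, avg_latency_us}' /tmp/bdevperf_result.json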
01:09:45.651 Running I/O for 1 seconds... 01:09:45.651 7530.00 IOPS, 29.41 MiB/s 01:09:45.651 Latency(us) 01:09:45.651 [2024-12-09T06:08:40.238Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:09:45.651 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:09:45.651 Verification LBA range: start 0x0 length 0x4000 01:09:45.651 NVMe0n1 : 1.00 7628.00 29.80 0.00 0.00 16729.46 1434.42 16634.04 01:09:45.651 [2024-12-09T06:08:40.238Z] =================================================================================================================== 01:09:45.651 [2024-12-09T06:08:40.238Z] Total : 7628.00 29.80 0.00 0.00 16729.46 1434.42 16634.04 01:09:45.651 06:08:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:09:45.651 06:08:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 01:09:45.651 06:08:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:09:45.911 06:08:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 01:09:45.911 06:08:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:09:45.911 06:08:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:09:46.170 06:08:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 01:09:49.462 06:08:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:09:49.462 06:08:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 01:09:49.462 06:08:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75035 01:09:49.462 06:08:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75035 ']' 01:09:49.462 06:08:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75035 01:09:49.462 06:08:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 01:09:49.462 06:08:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:09:49.462 06:08:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75035 01:09:49.462 06:08:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:09:49.462 06:08:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:09:49.462 06:08:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75035' 01:09:49.462 killing process with pid 75035 01:09:49.462 06:08:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75035 01:09:49.462 06:08:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75035 01:09:49.720 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 01:09:49.720 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:09:49.720 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 01:09:49.721 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:09:49.980 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 01:09:49.980 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 01:09:49.980 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 01:09:49.980 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:09:49.980 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 01:09:49.980 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 01:09:49.980 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:09:49.980 rmmod nvme_tcp 01:09:49.980 rmmod nvme_fabrics 01:09:49.980 rmmod nvme_keyring 01:09:49.980 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:09:49.980 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 01:09:49.980 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 01:09:49.980 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 74781 ']' 01:09:49.980 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 74781 01:09:49.980 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 74781 ']' 01:09:49.980 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 74781 01:09:49.980 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 01:09:49.980 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:09:49.980 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74781 01:09:49.980 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:09:49.980 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:09:49.980 killing process with pid 74781 01:09:49.980 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74781' 01:09:49.980 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 74781 01:09:49.980 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 74781 01:09:50.240 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:09:50.240 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:09:50.240 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:09:50.240 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 01:09:50.240 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 01:09:50.241 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:09:50.241 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 01:09:50.241 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 
-- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:09:50.241 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:09:50.241 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:09:50.241 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:09:50.241 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:09:50.241 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:09:50.241 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:09:50.241 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:09:50.501 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:09:50.501 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:09:50.501 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:09:50.501 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:09:50.501 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:09:50.501 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:09:50.501 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:09:50.501 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 01:09:50.501 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:09:50.501 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:09:50.501 06:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:09:50.501 06:08:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 01:09:50.501 ************************************ 01:09:50.501 END TEST nvmf_failover 01:09:50.501 ************************************ 01:09:50.501 01:09:50.501 real 0m32.095s 01:09:50.501 user 2m0.085s 01:09:50.501 sys 0m6.416s 01:09:50.501 06:08:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 01:09:50.501 06:08:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:09:50.760 ************************************ 01:09:50.760 START TEST nvmf_host_discovery 01:09:50.760 ************************************ 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 01:09:50.760 * Looking for test storage... 
01:09:50.760 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:09:50.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:50.760 --rc genhtml_branch_coverage=1 01:09:50.760 --rc genhtml_function_coverage=1 01:09:50.760 --rc genhtml_legend=1 01:09:50.760 --rc geninfo_all_blocks=1 01:09:50.760 --rc geninfo_unexecuted_blocks=1 01:09:50.760 01:09:50.760 ' 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:09:50.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:50.760 --rc genhtml_branch_coverage=1 01:09:50.760 --rc genhtml_function_coverage=1 01:09:50.760 --rc genhtml_legend=1 01:09:50.760 --rc geninfo_all_blocks=1 01:09:50.760 --rc geninfo_unexecuted_blocks=1 01:09:50.760 01:09:50.760 ' 01:09:50.760 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:09:50.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:50.760 --rc genhtml_branch_coverage=1 01:09:50.761 --rc genhtml_function_coverage=1 01:09:50.761 --rc genhtml_legend=1 01:09:50.761 --rc geninfo_all_blocks=1 01:09:50.761 --rc geninfo_unexecuted_blocks=1 01:09:50.761 01:09:50.761 ' 01:09:50.761 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:09:50.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:50.761 --rc genhtml_branch_coverage=1 01:09:50.761 --rc genhtml_function_coverage=1 01:09:50.761 --rc genhtml_legend=1 01:09:50.761 --rc geninfo_all_blocks=1 01:09:50.761 --rc geninfo_unexecuted_blocks=1 01:09:50.761 01:09:50.761 ' 01:09:50.761 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:09:51.020 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 01:09:51.020 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:09:51.020 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:09:51.020 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:09:51.020 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:09:51.020 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:09:51.020 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:09:51.020 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:09:51.020 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:09:51.020 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:09:51.020 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:09:51.020 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:09:51.020 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:09:51.020 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:09:51.020 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:09:51.020 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:09:51.020 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:09:51.020 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:09:51.020 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 01:09:51.020 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:09:51.020 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:09:51.020 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:09:51.021 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:09:51.021 Cannot find device "nvmf_init_br" 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:09:51.021 Cannot find device "nvmf_init_br2" 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:09:51.021 Cannot find device "nvmf_tgt_br" 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:09:51.021 Cannot find device "nvmf_tgt_br2" 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:09:51.021 Cannot find device "nvmf_init_br" 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:09:51.021 Cannot find device "nvmf_init_br2" 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:09:51.021 Cannot find device "nvmf_tgt_br" 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:09:51.021 Cannot find device "nvmf_tgt_br2" 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:09:51.021 Cannot find device "nvmf_br" 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:09:51.021 Cannot find device "nvmf_init_if" 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 01:09:51.021 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:09:51.280 Cannot find device "nvmf_init_if2" 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:09:51.280 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:09:51.280 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:09:51.280 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:09:51.538 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:09:51.539 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:09:51.539 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.107 ms 01:09:51.539 01:09:51.539 --- 10.0.0.3 ping statistics --- 01:09:51.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:51.539 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:09:51.539 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:09:51.539 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 01:09:51.539 01:09:51.539 --- 10.0.0.4 ping statistics --- 01:09:51.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:51.539 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:09:51.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:09:51.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 01:09:51.539 01:09:51.539 --- 10.0.0.1 ping statistics --- 01:09:51.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:51.539 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:09:51.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:09:51.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 01:09:51.539 01:09:51.539 --- 10.0.0.2 ping statistics --- 01:09:51.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:51.539 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=75434 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 75434 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75434 ']' 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 01:09:51.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 01:09:51.539 06:08:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:51.539 [2024-12-09 06:08:45.999026] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:09:51.539 [2024-12-09 06:08:45.999077] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:09:51.798 [2024-12-09 06:08:46.133773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:09:51.798 [2024-12-09 06:08:46.188573] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:09:51.798 [2024-12-09 06:08:46.188615] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:09:51.798 [2024-12-09 06:08:46.188624] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:09:51.798 [2024-12-09 06:08:46.188632] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:09:51.798 [2024-12-09 06:08:46.188639] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:09:51.798 [2024-12-09 06:08:46.189000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:09:51.798 [2024-12-09 06:08:46.265334] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:09:52.364 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:09:52.364 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 01:09:52.364 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:09:52.364 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 01:09:52.364 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:52.364 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:09:52.364 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:09:52.364 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:52.364 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:52.364 [2024-12-09 06:08:46.919407] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:09:52.364 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:52.364 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 01:09:52.364 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:52.364 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:52.364 [2024-12-09 06:08:46.931524] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 01:09:52.364 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:52.364 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 01:09:52.364 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:52.364 06:08:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:52.364 null0 01:09:52.364 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:52.364 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 01:09:52.365 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:52.365 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:52.623 null1 01:09:52.623 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:52.623 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 01:09:52.623 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:52.623 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:52.623 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:52.623 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75466 01:09:52.623 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 01:09:52.623 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75466 /tmp/host.sock 01:09:52.623 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75466 ']' 01:09:52.623 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 01:09:52.623 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 01:09:52.623 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 01:09:52.623 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 01:09:52.623 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 01:09:52.623 06:08:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:52.623 [2024-12-09 06:08:47.019364] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
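By this point the trace has created the TCP transport, registered the discovery listener on 10.0.0.3:8009, created two null bdevs, and started a second nvmf_tgt that plays the host role on /tmp/host.sock. A condensed sketch of that target-side RPC sequence, with every command taken from the trace; "rpc" is an illustrative alias for the SPDK rpc.py client, standing in for the harness's rpc_cmd wrapper:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -u 8192   # TCP transport; -u 8192 sets the in-capsule data size
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.3 -s 8009                 # discovery service listener
    $rpc bdev_null_create null0 1000 512           # two null bdevs (1000 MB, 512-byte blocks)
    $rpc bdev_null_create null1 1000 512
    $rpc bdev_wait_for_examine
    # Host side: a second nvmf_tgt on one core with its own RPC socket.
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    hostpid=$!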
01:09:52.623 [2024-12-09 06:08:47.019416] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75466 ] 01:09:52.623 [2024-12-09 06:08:47.172609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:09:52.881 [2024-12-09 06:08:47.213042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:09:52.881 [2024-12-09 06:08:47.253893] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:09:53.450 06:08:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:09:53.450 06:08:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:53.450 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 01:09:53.450 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 01:09:53.709 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:09:53.709 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:09:53.709 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:53.709 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:09:53.709 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:53.710 06:08:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:53.710 [2024-12-09 06:08:48.185826] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:09:53.710 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:09:54.021 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 01:09:54.021 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:54.021 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:54.021 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 01:09:54.021 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:54.021 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 01:09:54.021 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 01:09:54.021 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:09:54.021 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:09:54.021 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 01:09:54.021 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:54.021 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:54.021 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:54.021 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:09:54.021 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:09:54.021 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:09:54.021 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:09:54.021 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:09:54.021 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 01:09:54.021 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:09:54.021 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:54.021 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:54.021 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:09:54.021 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:09:54.021 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:09:54.021 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:54.021 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 01:09:54.021 06:08:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 01:09:54.588 [2024-12-09 06:08:48.888889] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 01:09:54.588 [2024-12-09 06:08:48.888919] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 01:09:54.588 [2024-12-09 06:08:48.888951] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:09:54.588 [2024-12-09 06:08:48.894912] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 01:09:54.588 [2024-12-09 06:08:48.949129] 
bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 01:09:54.588 [2024-12-09 06:08:48.950011] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x7d1da0:1 started. 01:09:54.588 [2024-12-09 06:08:48.951672] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 01:09:54.588 [2024-12-09 06:08:48.951695] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 01:09:54.588 [2024-12-09 06:08:48.957483] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x7d1da0 was disconnected and freed. delete nvme_qpair. 01:09:54.846 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:09:54.846 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:09:54.846 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 01:09:54.846 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:09:54.846 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:54.846 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:54.846 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:09:54.846 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:09:54.846 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:09:54.846 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:55.105 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:55.105 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:09:55.105 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 01:09:55.105 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 01:09:55.105 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:55.106 06:08:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 01:09:55.106 [2024-12-09 06:08:49.609227] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x7e0190:1 started. 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:09:55.106 [2024-12-09 06:08:49.616626] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x7e0190 was disconnected and freed. delete nvme_qpair. 
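Each waitforcondition in the trace re-evaluates a small helper from host/discovery.sh that wraps a host-side RPC in a jq pipeline. Reconstructed below from the pipelines visible in the trace; the originals may differ in detail:

    get_subsystem_names() {    # controller names seen by the host, e.g. "nvme0"
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {          # namespaces attached as bdevs, e.g. "nvme0n1 nvme0n2"
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {    # trsvcid of every path of controller $1, e.g. "4420 4421"
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    get_notification_count() { # notifications since the last check; advances notify_id
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }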
01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:55.106 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:55.365 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 01:09:55.365 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 01:09:55.365 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:09:55.365 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:09:55.365 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 01:09:55.365 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:55.365 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:55.365 [2024-12-09 06:08:49.716495] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:09:55.365 [2024-12-09 06:08:49.717373] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 01:09:55.365 [2024-12-09 06:08:49.717400] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:09:55.365 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:55.365 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 01:09:55.365 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 01:09:55.366 [2024-12-09 06:08:49.723369] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:09:55.366 [2024-12-09 06:08:49.782130] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 01:09:55.366 [2024-12-09 06:08:49.782170] 
bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 01:09:55.366 [2024-12-09 06:08:49.782179] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 01:09:55.366 [2024-12-09 06:08:49.782185] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:55.366 [2024-12-09 06:08:49.917204] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 01:09:55.366 [2024-12-09 06:08:49.917229] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:09:55.366 [2024-12-09 06:08:49.917658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:09:55.366 [2024-12-09 06:08:49.917688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:55.366 [2024-12-09 06:08:49.917699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:09:55.366 [2024-12-09 06:08:49.917708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:55.366 [2024-12-09 06:08:49.917718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:09:55.366 [2024-12-09 06:08:49.917726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:55.366 [2024-12-09 06:08:49.917736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:09:55.366 [2024-12-09 06:08:49.917744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:09:55.366 [2024-12-09 06:08:49.917753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x7adfb0 is same with the state(6) to be set 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:09:55.366 [2024-12-09 06:08:49.923203] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 01:09:55.366 [2024-12-09 06:08:49.923229] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 01:09:55.366 [2024-12-09 06:08:49.923268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7adfb0 (9): Bad file descriptor 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:09:55.366 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:09:55.367 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:55.626 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:09:55.626 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:09:55.626 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:09:55.626 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:09:55.626 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:09:55.626 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:09:55.626 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 01:09:55.626 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 01:09:55.626 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:09:55.626 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:09:55.626 
06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:09:55.626 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:55.626 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:55.626 06:08:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:55.626 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:55.627 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 01:09:55.627 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 01:09:55.627 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:09:55.627 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:09:55.627 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 01:09:55.627 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 01:09:55.627 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:09:55.627 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:55.627 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:55.627 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:09:55.627 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:09:55.627 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:09:55.627 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:55.627 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 01:09:55.627 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:09:55.627 
06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 01:09:55.627 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 01:09:55.627 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:09:55.627 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:09:55.627 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 01:09:55.627 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 01:09:55.627 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:09:55.627 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:55.627 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:55.627 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:09:55.627 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:09:55.627 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:09:55.627 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:55.886 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 01:09:55.886 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:09:55.886 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 01:09:55.886 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 01:09:55.886 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:09:55.886 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:09:55.886 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:09:55.886 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:09:55.886 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:09:55.886 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:09:55.886 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 01:09:55.886 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 01:09:55.886 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:55.886 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:55.886 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:55.886 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 01:09:55.886 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 01:09:55.886 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:09:55.886 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:09:55.886 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:09:55.886 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:55.886 06:08:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:56.823 [2024-12-09 06:08:51.295477] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 01:09:56.823 [2024-12-09 06:08:51.295500] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 01:09:56.823 [2024-12-09 06:08:51.295512] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:09:56.823 [2024-12-09 06:08:51.301495] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 01:09:56.823 [2024-12-09 06:08:51.359673] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 01:09:56.823 [2024-12-09 06:08:51.360290] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x7d4c00:1 started. 01:09:56.823 [2024-12-09 06:08:51.362177] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 01:09:56.823 [2024-12-09 06:08:51.362210] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 01:09:56.823 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:56.823 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:09:56.823 [2024-12-09 06:08:51.364392] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x7d4c00 was disconnected and freed. delete nvme_qpair. 
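For orientation, the discovery sequence traced above can be driven by hand with the same RPCs against the host application's socket. A minimal sketch, assuming an SPDK host app is already serving /tmp/host.sock and a discovery subsystem is listening on 10.0.0.3:8009 as configured earlier in this run (names, address, and NQN are this harness's values, not general defaults):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Start discovery; -w blocks until the discovered controllers are attached.
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -w
    # Inspect what the discovery service attached and which bdevs appeared.
    $rpc -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name'
    $rpc -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    # Re-running bdev_nvme_start_discovery with an already-used -b name is expected to
    # fail with "File exists" (code -17), which is what the NOT wrapper below verifies.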
01:09:56.823 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 01:09:56.823 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:09:56.823 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:09:56.823 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:09:56.823 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:09:56.823 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:09:56.824 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:09:56.824 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:56.824 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:56.824 request: 01:09:56.824 { 01:09:56.824 "name": "nvme", 01:09:56.824 "trtype": "tcp", 01:09:56.824 "traddr": "10.0.0.3", 01:09:56.824 "adrfam": "ipv4", 01:09:56.824 "trsvcid": "8009", 01:09:56.824 "hostnqn": "nqn.2021-12.io.spdk:test", 01:09:56.824 "wait_for_attach": true, 01:09:56.824 "method": "bdev_nvme_start_discovery", 01:09:56.824 "req_id": 1 01:09:56.824 } 01:09:56.824 Got JSON-RPC error response 01:09:56.824 response: 01:09:56.824 { 01:09:56.824 "code": -17, 01:09:56.824 "message": "File exists" 01:09:56.824 } 01:09:56.824 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:09:56.824 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 01:09:56.824 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:09:56.824 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:09:56.824 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:09:56.824 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 01:09:56.824 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:09:56.824 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:56.824 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:56.824 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 01:09:56.824 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 01:09:56.824 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 01:09:56.824 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:57.083 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 01:09:57.083 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 01:09:57.083 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:09:57.083 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:09:57.083 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:09:57.083 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:57.083 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:09:57.083 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:57.083 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:57.083 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:57.084 request: 01:09:57.084 { 01:09:57.084 "name": "nvme_second", 01:09:57.084 "trtype": "tcp", 01:09:57.084 "traddr": "10.0.0.3", 01:09:57.084 "adrfam": "ipv4", 01:09:57.084 "trsvcid": "8009", 01:09:57.084 "hostnqn": "nqn.2021-12.io.spdk:test", 01:09:57.084 "wait_for_attach": true, 01:09:57.084 "method": "bdev_nvme_start_discovery", 01:09:57.084 "req_id": 1 01:09:57.084 } 01:09:57.084 Got JSON-RPC error response 01:09:57.084 response: 01:09:57.084 { 01:09:57.084 "code": -17, 01:09:57.084 "message": "File exists" 01:09:57.084 } 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # 
get_discovery_ctrlrs 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:57.084 06:08:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:09:58.043 [2024-12-09 06:08:52.616576] uring.c: 664:uring_sock_create: *ERROR*: connect() 
failed, errno = 111 01:09:58.043 [2024-12-09 06:08:52.616610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d1bb0 with addr=10.0.0.3, port=8010 01:09:58.043 [2024-12-09 06:08:52.616627] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 01:09:58.043 [2024-12-09 06:08:52.616635] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:09:58.043 [2024-12-09 06:08:52.616643] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 01:09:59.422 [2024-12-09 06:08:53.614946] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:09:59.422 [2024-12-09 06:08:53.614978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7e15d0 with addr=10.0.0.3, port=8010 01:09:59.422 [2024-12-09 06:08:53.614993] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 01:09:59.422 [2024-12-09 06:08:53.615001] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:09:59.422 [2024-12-09 06:08:53.615008] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 01:10:00.361 [2024-12-09 06:08:54.613259] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 01:10:00.361 request: 01:10:00.361 { 01:10:00.361 "name": "nvme_second", 01:10:00.361 "trtype": "tcp", 01:10:00.361 "traddr": "10.0.0.3", 01:10:00.361 "adrfam": "ipv4", 01:10:00.361 "trsvcid": "8010", 01:10:00.361 "hostnqn": "nqn.2021-12.io.spdk:test", 01:10:00.361 "wait_for_attach": false, 01:10:00.361 "attach_timeout_ms": 3000, 01:10:00.361 "method": "bdev_nvme_start_discovery", 01:10:00.361 "req_id": 1 01:10:00.361 } 01:10:00.361 Got JSON-RPC error response 01:10:00.361 response: 01:10:00.361 { 01:10:00.361 "code": -110, 01:10:00.361 "message": "Connection timed out" 01:10:00.361 } 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 01:10:00.361 06:08:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75466 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:10:00.361 rmmod nvme_tcp 01:10:00.361 rmmod nvme_fabrics 01:10:00.361 rmmod nvme_keyring 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 75434 ']' 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 75434 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 75434 ']' 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 75434 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75434 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:10:00.361 killing process with pid 75434 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75434' 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 75434 01:10:00.361 06:08:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 75434 01:10:00.625 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:10:00.625 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:10:00.625 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:10:00.625 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 01:10:00.625 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 01:10:00.625 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:10:00.625 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 01:10:00.625 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:10:00.625 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:10:00.625 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:10:00.625 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:10:00.625 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:10:00.890 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:10:00.890 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:10:00.890 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:10:00.890 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:10:00.890 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:10:00.890 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:10:00.890 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:10:00.890 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:10:00.890 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:10:00.890 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:10:00.890 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 01:10:00.890 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:00.890 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:10:00.890 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:00.890 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 01:10:00.890 01:10:00.890 real 0m10.346s 01:10:00.890 user 0m18.027s 01:10:00.890 sys 0m2.807s 01:10:00.890 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 01:10:00.890 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:10:00.890 ************************************ 01:10:00.890 END TEST nvmf_host_discovery 01:10:00.890 ************************************ 01:10:01.150 06:08:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 01:10:01.150 06:08:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:10:01.150 06:08:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:10:01.150 06:08:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:10:01.150 ************************************ 01:10:01.150 START TEST nvmf_host_multipath_status 01:10:01.150 ************************************ 01:10:01.150 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 01:10:01.150 * Looking for test storage... 01:10:01.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:10:01.150 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:10:01.150 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 01:10:01.150 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:10:01.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:01.411 --rc genhtml_branch_coverage=1 01:10:01.411 --rc genhtml_function_coverage=1 01:10:01.411 --rc genhtml_legend=1 01:10:01.411 --rc geninfo_all_blocks=1 01:10:01.411 --rc geninfo_unexecuted_blocks=1 01:10:01.411 01:10:01.411 ' 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:10:01.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:01.411 --rc genhtml_branch_coverage=1 01:10:01.411 --rc genhtml_function_coverage=1 01:10:01.411 --rc genhtml_legend=1 01:10:01.411 --rc geninfo_all_blocks=1 01:10:01.411 --rc geninfo_unexecuted_blocks=1 01:10:01.411 01:10:01.411 ' 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:10:01.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:01.411 --rc genhtml_branch_coverage=1 01:10:01.411 --rc genhtml_function_coverage=1 01:10:01.411 --rc genhtml_legend=1 01:10:01.411 --rc geninfo_all_blocks=1 01:10:01.411 --rc geninfo_unexecuted_blocks=1 01:10:01.411 01:10:01.411 ' 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:10:01.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:01.411 --rc genhtml_branch_coverage=1 01:10:01.411 --rc genhtml_function_coverage=1 01:10:01.411 --rc genhtml_legend=1 01:10:01.411 --rc geninfo_all_blocks=1 01:10:01.411 --rc geninfo_unexecuted_blocks=1 01:10:01.411 01:10:01.411 ' 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:10:01.411 06:08:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:01.411 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:10:01.412 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:10:01.412 Cannot find device "nvmf_init_br" 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:10:01.412 Cannot find device "nvmf_init_br2" 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:10:01.412 Cannot find device "nvmf_tgt_br" 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:10:01.412 Cannot find device "nvmf_tgt_br2" 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:10:01.412 Cannot find device "nvmf_init_br" 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:10:01.412 Cannot find device "nvmf_init_br2" 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:10:01.412 Cannot find device "nvmf_tgt_br" 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:10:01.412 Cannot find device "nvmf_tgt_br2" 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:10:01.412 Cannot find device "nvmf_br" 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 01:10:01.412 06:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 01:10:01.672 Cannot find device "nvmf_init_if" 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:10:01.672 Cannot find device "nvmf_init_if2" 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:10:01.672 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:10:01.672 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:10:01.672 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:10:01.932 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:10:01.932 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:10:01.932 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:10:01.932 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:10:01.932 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:10:01.932 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:10:01.932 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:10:01.932 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:10:01.932 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 01:10:01.932 01:10:01.932 --- 10.0.0.3 ping statistics --- 01:10:01.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:01.932 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 01:10:01.933 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:10:01.933 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:10:01.933 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.107 ms 01:10:01.933 01:10:01.933 --- 10.0.0.4 ping statistics --- 01:10:01.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:01.933 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 01:10:01.933 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:10:01.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:10:01.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 01:10:01.933 01:10:01.933 --- 10.0.0.1 ping statistics --- 01:10:01.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:01.933 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 01:10:01.933 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:10:01.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:10:01.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 01:10:01.933 01:10:01.933 --- 10.0.0.2 ping statistics --- 01:10:01.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:01.933 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 01:10:01.933 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:10:01.933 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 01:10:01.933 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:10:01.933 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:10:01.933 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:10:01.933 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:10:01.933 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:10:01.933 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:10:01.933 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:10:01.933 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 01:10:01.933 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:10:01.933 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 01:10:01.933 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:10:01.933 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 01:10:01.933 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=75970 01:10:01.933 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 75970 01:10:01.933 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 75970 ']' 01:10:01.933 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:10:01.933 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 01:10:01.933 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:10:01.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
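The connectivity checks above close out the network setup for this test. As a consolidated sketch, the topology amounts to the iproute2/iptables commands already traced in this section (interface, bridge, and namespace names are the harness's own; root is required):

    # Two initiator veths stay on the host side; two target veths move into the
    # nvmf_tgt_ns_spdk namespace; all four bridge-side peers join nvmf_br.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Admit NVMe/TCP traffic and bridge forwarding, then confirm reachability both ways.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1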
01:10:01.933 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 01:10:01.933 06:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:10:01.933 [2024-12-09 06:08:56.392631] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:10:01.933 [2024-12-09 06:08:56.392689] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:10:02.192 [2024-12-09 06:08:56.542790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:10:02.192 [2024-12-09 06:08:56.582583] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:10:02.192 [2024-12-09 06:08:56.582624] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:10:02.192 [2024-12-09 06:08:56.582633] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:10:02.192 [2024-12-09 06:08:56.582640] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:10:02.192 [2024-12-09 06:08:56.582647] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:10:02.192 [2024-12-09 06:08:56.583590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:10:02.192 [2024-12-09 06:08:56.583600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:10:02.192 [2024-12-09 06:08:56.625516] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:10:02.760 06:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:10:02.760 06:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 01:10:02.760 06:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:10:02.760 06:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 01:10:02.760 06:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:10:02.760 06:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:10:02.760 06:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=75970 01:10:02.760 06:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:10:03.019 [2024-12-09 06:08:57.492206] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:10:03.019 06:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:10:03.279 Malloc0 01:10:03.279 06:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 01:10:03.539 06:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 01:10:03.539 06:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:10:03.798 [2024-12-09 06:08:58.294725] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:10:03.798 06:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:10:04.056 [2024-12-09 06:08:58.491036] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:10:04.056 06:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76017 01:10:04.056 06:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 01:10:04.056 06:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:10:04.056 06:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76017 /var/tmp/bdevperf.sock 01:10:04.056 06:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76017 ']' 01:10:04.056 06:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:10:04.056 06:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 01:10:04.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:10:04.056 06:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
01:10:04.056 06:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 01:10:04.056 06:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:10:04.989 06:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:10:04.989 06:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 01:10:04.989 06:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 01:10:05.246 06:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 01:10:05.504 Nvme0n1 01:10:05.504 06:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 01:10:05.763 Nvme0n1 01:10:05.763 06:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 01:10:05.763 06:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 01:10:07.729 06:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 01:10:07.729 06:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 01:10:07.987 06:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:10:07.987 06:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 01:10:09.364 06:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 01:10:09.364 06:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:10:09.364 06:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:09.364 06:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:10:09.364 06:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:09.364 06:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:10:09.364 06:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:10:09.364 06:09:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:09.624 06:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:10:09.624 06:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:10:09.624 06:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:09.624 06:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:10:09.624 06:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:09.624 06:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:10:09.624 06:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:09.624 06:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:10:09.883 06:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:09.883 06:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:10:09.883 06:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:09.883 06:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:10:10.142 06:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:10.142 06:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:10:10.142 06:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:10.142 06:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:10:10.402 06:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:10.402 06:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 01:10:10.402 06:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:10:10.402 06:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
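Each check_status block in this trace is six port_status calls, and each port_status call is the rpc.py bdev_nvme_get_io_paths / jq pair repeated above. The helper lives in host/multipath_status.sh; its body is not shown in this log, so the following is only an approximate reconstruction from the commands it emits:

    port_status() { # port_status <trsvcid> <attribute> <expected>
        local port=$1 attr=$2 expected=$3
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        # Compare the requested attribute of the path on the given port against the expected value.
        [[ $("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr") == "$expected" ]]
    }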
01:10:10.667 06:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 01:10:11.603 06:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 01:10:11.603 06:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 01:10:11.603 06:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:11.603 06:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:10:11.863 06:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:10:11.863 06:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:10:11.863 06:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:11.863 06:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:10:12.123 06:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:12.123 06:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:10:12.123 06:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:12.123 06:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:10:12.382 06:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:12.382 06:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:10:12.382 06:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:10:12.382 06:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:12.382 06:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:12.382 06:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:10:12.382 06:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:12.641 06:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:10:12.641 06:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:12.641 06:09:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:10:12.641 06:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:10:12.641 06:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:12.900 06:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:12.900 06:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 01:10:12.900 06:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:10:13.159 06:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 01:10:13.159 06:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 01:10:14.538 06:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 01:10:14.538 06:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:10:14.538 06:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:14.538 06:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:10:14.538 06:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:14.538 06:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:10:14.538 06:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:10:14.538 06:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:14.797 06:09:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:10:14.797 06:09:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:10:14.797 06:09:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:14.797 06:09:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:10:15.056 06:09:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:15.056 06:09:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 01:10:15.056 06:09:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:15.056 06:09:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:10:15.056 06:09:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:15.056 06:09:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:10:15.056 06:09:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:10:15.056 06:09:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:15.316 06:09:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:15.316 06:09:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:10:15.316 06:09:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:15.316 06:09:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:10:15.575 06:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:15.575 06:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 01:10:15.575 06:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:10:15.834 06:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 01:10:15.834 06:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 01:10:17.212 06:09:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 01:10:17.212 06:09:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:10:17.212 06:09:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:17.212 06:09:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:10:17.212 06:09:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:17.212 06:09:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 01:10:17.212 06:09:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:17.212 06:09:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:10:17.470 06:09:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:10:17.470 06:09:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:10:17.470 06:09:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:17.470 06:09:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:10:17.470 06:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:17.470 06:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:10:17.470 06:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:10:17.470 06:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:17.728 06:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:17.728 06:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:10:17.728 06:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:10:17.728 06:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:17.987 06:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:17.987 06:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 01:10:17.987 06:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:10:17.987 06:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:18.245 06:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:10:18.245 06:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 01:10:18.245 06:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:10:18.503 06:09:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 01:10:18.503 06:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 01:10:19.878 06:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 01:10:19.878 06:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 01:10:19.878 06:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:19.879 06:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:10:19.879 06:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:10:19.879 06:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:10:19.879 06:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:19.879 06:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:10:20.138 06:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:10:20.138 06:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:10:20.138 06:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:20.138 06:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:10:20.138 06:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:20.138 06:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:10:20.138 06:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:20.138 06:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:10:20.397 06:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:20.397 06:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 01:10:20.397 06:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:20.397 06:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] 
| select (.transport.trsvcid=="4420").accessible' 01:10:20.660 06:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:10:20.660 06:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 01:10:20.660 06:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:10:20.660 06:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:20.918 06:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:10:20.918 06:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 01:10:20.918 06:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:10:20.918 06:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:10:21.176 06:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 01:10:22.112 06:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 01:10:22.112 06:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 01:10:22.112 06:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:22.112 06:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:10:22.371 06:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:10:22.371 06:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:10:22.371 06:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:10:22.371 06:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:22.629 06:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:22.629 06:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:10:22.629 06:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:22.629 06:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
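The set_ANA_state calls driving these transitions (multipath_status.sh@59 and @60) just flip the ANA group state of the two listeners; a sketch consistent with the two RPCs this log records each time, with the helper body itself an assumption rather than a quote from the script:

    set_ANA_state() { # set_ANA_state <state for port 4420> <state for port 4421>
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }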
01:10:22.887 06:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:22.887 06:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:10:22.887 06:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:22.887 06:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:10:22.887 06:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:22.887 06:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 01:10:22.887 06:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:22.887 06:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:10:23.146 06:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:10:23.146 06:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:10:23.146 06:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:23.146 06:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:10:23.406 06:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:23.406 06:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 01:10:23.665 06:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 01:10:23.665 06:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 01:10:23.665 06:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:10:23.925 06:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 01:10:24.863 06:09:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 01:10:24.863 06:09:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:10:24.863 06:09:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
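After bdev_nvme_set_multipath_policy switches Nvme0n1 to active_active and both listeners return to optimized, both paths are expected to report current=true, which the check_status true true true true true true pass underway here verifies. The jq filters walk output whose shape is roughly as follows; this is a hypothetical sample limited to the fields the test inspects, runnable on its own as a check of the filter:

    echo '{"poll_groups":[{"io_paths":[{"transport":{"trsvcid":"4420"},"current":true,"connected":true,"accessible":true},{"transport":{"trsvcid":"4421"},"current":true,"connected":true,"accessible":true}]}]}' \
        | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
    # prints: true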
01:10:24.863 06:09:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:10:25.122 06:09:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:25.122 06:09:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:10:25.122 06:09:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:10:25.123 06:09:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:25.382 06:09:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:25.382 06:09:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:10:25.382 06:09:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:25.382 06:09:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:10:25.641 06:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:25.641 06:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:10:25.641 06:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:25.641 06:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:10:25.641 06:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:25.641 06:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:10:25.641 06:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:25.900 06:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:10:25.900 06:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:25.901 06:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:10:25.901 06:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:25.901 06:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:10:26.160 06:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:26.160 
06:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 01:10:26.160 06:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:10:26.419 06:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:10:26.678 06:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 01:10:27.617 06:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 01:10:27.617 06:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 01:10:27.617 06:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:27.617 06:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:10:27.877 06:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:10:27.877 06:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:10:27.877 06:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:10:27.877 06:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:27.877 06:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:27.877 06:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:10:27.877 06:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:27.877 06:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:10:28.137 06:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:28.137 06:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:10:28.137 06:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:28.137 06:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:10:28.396 06:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:28.396 06:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:10:28.397 06:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:28.397 06:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:10:28.656 06:09:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:28.656 06:09:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:10:28.656 06:09:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:28.656 06:09:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:10:28.915 06:09:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:28.916 06:09:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 01:10:28.916 06:09:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:10:28.916 06:09:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 01:10:29.175 06:09:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 01:10:30.113 06:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 01:10:30.113 06:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:10:30.113 06:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:30.113 06:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:10:30.372 06:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:30.372 06:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:10:30.372 06:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:30.372 06:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:10:30.629 06:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:30.629 06:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 01:10:30.629 06:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:30.629 06:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:10:30.886 06:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:30.886 06:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:10:30.886 06:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:30.886 06:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:10:31.145 06:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:31.145 06:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:10:31.145 06:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:31.145 06:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:10:31.145 06:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:31.145 06:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:10:31.145 06:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:31.145 06:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:10:31.404 06:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:31.404 06:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 01:10:31.404 06:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:10:31.662 06:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 01:10:31.921 06:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 01:10:32.858 06:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 01:10:32.858 06:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:10:32.858 06:09:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:32.858 06:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:10:33.116 06:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:33.116 06:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:10:33.116 06:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:10:33.116 06:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:33.374 06:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:10:33.374 06:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:10:33.374 06:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:10:33.374 06:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:33.374 06:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:33.374 06:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:10:33.374 06:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:33.374 06:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:10:33.632 06:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:33.633 06:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:10:33.633 06:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:33.633 06:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:10:33.891 06:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:10:33.891 06:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 01:10:33.891 06:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:10:33.892 06:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 01:10:34.152 06:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:10:34.152 06:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76017 01:10:34.152 06:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76017 ']' 01:10:34.152 06:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76017 01:10:34.152 06:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 01:10:34.152 06:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:10:34.152 06:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76017 01:10:34.152 killing process with pid 76017 01:10:34.152 06:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:10:34.152 06:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:10:34.152 06:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76017' 01:10:34.152 06:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76017 01:10:34.152 06:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76017 01:10:34.152 { 01:10:34.152 "results": [ 01:10:34.152 { 01:10:34.152 "job": "Nvme0n1", 01:10:34.152 "core_mask": "0x4", 01:10:34.152 "workload": "verify", 01:10:34.152 "status": "terminated", 01:10:34.152 "verify_range": { 01:10:34.152 "start": 0, 01:10:34.152 "length": 16384 01:10:34.152 }, 01:10:34.152 "queue_depth": 128, 01:10:34.152 "io_size": 4096, 01:10:34.152 "runtime": 28.381027, 01:10:34.152 "iops": 9061.088592741904, 01:10:34.152 "mibps": 35.39487731539806, 01:10:34.152 "io_failed": 0, 01:10:34.152 "io_timeout": 0, 01:10:34.152 "avg_latency_us": 14104.675497575983, 01:10:34.152 "min_latency_us": 113.50361445783132, 01:10:34.152 "max_latency_us": 3018551.3124497994 01:10:34.152 } 01:10:34.152 ], 01:10:34.152 "core_count": 1 01:10:34.152 } 01:10:34.415 06:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76017 01:10:34.415 06:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:10:34.415 [2024-12-09 06:08:58.559648] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:10:34.415 [2024-12-09 06:08:58.559724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76017 ] 01:10:34.415 [2024-12-09 06:08:58.708951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:10:34.415 [2024-12-09 06:08:58.749816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:10:34.415 [2024-12-09 06:08:58.792142] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:10:34.415 Running I/O for 90 seconds... 
01:10:34.415 8305.00 IOPS, 32.44 MiB/s [2024-12-09T06:09:29.002Z] 8331.50 IOPS, 32.54 MiB/s [2024-12-09T06:09:29.002Z] 9020.00 IOPS, 35.23 MiB/s [2024-12-09T06:09:29.002Z] 9357.00 IOPS, 36.55 MiB/s [2024-12-09T06:09:29.002Z] 9582.80 IOPS, 37.43 MiB/s [2024-12-09T06:09:29.002Z] 9796.33 IOPS, 38.27 MiB/s [2024-12-09T06:09:29.002Z] 9935.14 IOPS, 38.81 MiB/s [2024-12-09T06:09:29.002Z] 10012.38 IOPS, 39.11 MiB/s [2024-12-09T06:09:29.002Z] 10025.78 IOPS, 39.16 MiB/s [2024-12-09T06:09:29.002Z] 10041.60 IOPS, 39.23 MiB/s [2024-12-09T06:09:29.002Z] 10054.55 IOPS, 39.28 MiB/s [2024-12-09T06:09:29.002Z] 10064.92 IOPS, 39.32 MiB/s [2024-12-09T06:09:29.002Z] [2024-12-09 06:09:12.852592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.415 [2024-12-09 06:09:12.852660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:10:34.415 [2024-12-09 06:09:12.852712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.415 [2024-12-09 06:09:12.852729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:10:34.415 [2024-12-09 06:09:12.852751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.415 [2024-12-09 06:09:12.852767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:10:34.415 [2024-12-09 06:09:12.852789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.416 [2024-12-09 06:09:12.852804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.852825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.416 [2024-12-09 06:09:12.852841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.852862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.416 [2024-12-09 06:09:12.852877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.852898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.416 [2024-12-09 06:09:12.852913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.852934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.416 [2024-12-09 06:09:12.852949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:10:34.416 
[2024-12-09 06:09:12.853118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.416 [2024-12-09 06:09:12.853137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.853186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:100704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.416 [2024-12-09 06:09:12.853202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.853223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.416 [2024-12-09 06:09:12.853238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.853260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.416 [2024-12-09 06:09:12.853275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.853296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.416 [2024-12-09 06:09:12.853311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.853334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.416 [2024-12-09 06:09:12.853349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.853379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.416 [2024-12-09 06:09:12.853395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.853417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.416 [2024-12-09 06:09:12.853433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.853455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:100104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.416 [2024-12-09 06:09:12.853470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.853493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:100112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.416 [2024-12-09 06:09:12.853510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:47 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.853533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.416 [2024-12-09 06:09:12.853550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.853573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.416 [2024-12-09 06:09:12.853588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.853610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.416 [2024-12-09 06:09:12.853625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.853657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.416 [2024-12-09 06:09:12.853674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.853697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.416 [2024-12-09 06:09:12.853713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.853736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.416 [2024-12-09 06:09:12.853752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.854442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.416 [2024-12-09 06:09:12.854462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.854488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.416 [2024-12-09 06:09:12.854505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.854529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.416 [2024-12-09 06:09:12.854545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.854567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.416 [2024-12-09 06:09:12.854583] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.854605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.416 [2024-12-09 06:09:12.854622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.854644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:100800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.416 [2024-12-09 06:09:12.854659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.854682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.416 [2024-12-09 06:09:12.854697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.854720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:100816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.416 [2024-12-09 06:09:12.854735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.854758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.416 [2024-12-09 06:09:12.854773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.854797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.416 [2024-12-09 06:09:12.854823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.854845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.416 [2024-12-09 06:09:12.854862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.854884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.416 [2024-12-09 06:09:12.854900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.854922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.416 [2024-12-09 06:09:12.854938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.854961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:10:34.416 [2024-12-09 06:09:12.854976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.854999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:100128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.416 [2024-12-09 06:09:12.855014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.855037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:100136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.416 [2024-12-09 06:09:12.855053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.855076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:100144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.416 [2024-12-09 06:09:12.855101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:10:34.416 [2024-12-09 06:09:12.855125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:100152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.416 [2024-12-09 06:09:12.855141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.855164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:100160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.417 [2024-12-09 06:09:12.855179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.855205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:100168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.417 [2024-12-09 06:09:12.855221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.855244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:100176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.417 [2024-12-09 06:09:12.855259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.855282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:100184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.417 [2024-12-09 06:09:12.855305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.855328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:100192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.417 [2024-12-09 06:09:12.855343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.855366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:54 nsid:1 lba:100200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.417 [2024-12-09 06:09:12.855382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.855405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:100208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.417 [2024-12-09 06:09:12.855421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.855444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:100216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.417 [2024-12-09 06:09:12.855460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.855482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:100224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.417 [2024-12-09 06:09:12.855498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.855521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:100232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.417 [2024-12-09 06:09:12.855537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.855559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.417 [2024-12-09 06:09:12.855575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.855597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:100248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.417 [2024-12-09 06:09:12.855613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.855635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:100256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.417 [2024-12-09 06:09:12.855651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.855674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.417 [2024-12-09 06:09:12.855690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.855713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:100272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.417 [2024-12-09 06:09:12.855729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.855752] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:100280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.417 [2024-12-09 06:09:12.855767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.855795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:100288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.417 [2024-12-09 06:09:12.855812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.855834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.417 [2024-12-09 06:09:12.855850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.855872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:100304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.417 [2024-12-09 06:09:12.855888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.855911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.417 [2024-12-09 06:09:12.855926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.855949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.417 [2024-12-09 06:09:12.855965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.855987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.417 [2024-12-09 06:09:12.856003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.856030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.417 [2024-12-09 06:09:12.856046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.856069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.417 [2024-12-09 06:09:12.856094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.856117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:100904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.417 [2024-12-09 06:09:12.856133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 
sqhd:004a p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.856156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.417 [2024-12-09 06:09:12.856175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.856198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.417 [2024-12-09 06:09:12.856214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.856236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.417 [2024-12-09 06:09:12.856252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.856282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.417 [2024-12-09 06:09:12.856297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.856320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.417 [2024-12-09 06:09:12.856336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.856358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:100312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.417 [2024-12-09 06:09:12.856374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.856398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.417 [2024-12-09 06:09:12.856414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.856438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:100328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.417 [2024-12-09 06:09:12.856454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.856477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:100336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.417 [2024-12-09 06:09:12.856492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.856515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.417 [2024-12-09 06:09:12.856531] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.856554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.417 [2024-12-09 06:09:12.856570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.856593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:100360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.417 [2024-12-09 06:09:12.856608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:10:34.417 [2024-12-09 06:09:12.856630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.417 [2024-12-09 06:09:12.856646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.856668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:100376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.418 [2024-12-09 06:09:12.856684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.856706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.418 [2024-12-09 06:09:12.856722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.856750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.418 [2024-12-09 06:09:12.856767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.856790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.418 [2024-12-09 06:09:12.856805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.856827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.418 [2024-12-09 06:09:12.856843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.856866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.418 [2024-12-09 06:09:12.856882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.856905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:100424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.418 
[2024-12-09 06:09:12.856920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.856944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:100432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.418 [2024-12-09 06:09:12.856960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.856986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.418 [2024-12-09 06:09:12.857002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.857025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.418 [2024-12-09 06:09:12.857041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.857065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.418 [2024-12-09 06:09:12.857081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.857115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.418 [2024-12-09 06:09:12.857130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.857152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.418 [2024-12-09 06:09:12.857168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.857190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.418 [2024-12-09 06:09:12.857207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.857230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.418 [2024-12-09 06:09:12.857257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.857280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.418 [2024-12-09 06:09:12.857295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.857319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:100440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.418 [2024-12-09 06:09:12.857335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.857357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:100448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.418 [2024-12-09 06:09:12.857382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.857405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.418 [2024-12-09 06:09:12.857420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.857443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:100464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.418 [2024-12-09 06:09:12.857459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.857481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:100472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.418 [2024-12-09 06:09:12.857497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.857519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:100480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.418 [2024-12-09 06:09:12.857542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.857566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:100488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.418 [2024-12-09 06:09:12.857583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.857606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.418 [2024-12-09 06:09:12.857621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.857644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:100504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.418 [2024-12-09 06:09:12.857660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.857684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:100512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.418 [2024-12-09 06:09:12.857700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.857722] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:100520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.418 [2024-12-09 06:09:12.857747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.857771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.418 [2024-12-09 06:09:12.857787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.857810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:100536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.418 [2024-12-09 06:09:12.857826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.857849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:100544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.418 [2024-12-09 06:09:12.857864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.857887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.418 [2024-12-09 06:09:12.857902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.857925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:100560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.418 [2024-12-09 06:09:12.857941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.857963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.418 [2024-12-09 06:09:12.857979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.858002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.418 [2024-12-09 06:09:12.858017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.858039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.418 [2024-12-09 06:09:12.858055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.858077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.418 [2024-12-09 06:09:12.858103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 
sqhd:007b p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.858126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.418 [2024-12-09 06:09:12.858141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.858164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.418 [2024-12-09 06:09:12.858184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:10:34.418 [2024-12-09 06:09:12.858207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.419 [2024-12-09 06:09:12.858223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:10:34.419 [2024-12-09 06:09:12.858252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:101072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.419 [2024-12-09 06:09:12.858268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:10:34.419 [2024-12-09 06:09:12.858290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.419 [2024-12-09 06:09:12.858305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:34.419 [2024-12-09 06:09:12.858328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.419 [2024-12-09 06:09:12.858344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:10:34.419 [2024-12-09 06:09:12.858367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:100584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.419 [2024-12-09 06:09:12.858382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:10:34.419 [2024-12-09 06:09:12.858405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:100592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.419 [2024-12-09 06:09:12.858420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:10:34.419 [2024-12-09 06:09:12.858443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:100600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.419 [2024-12-09 06:09:12.858459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:10:34.419 [2024-12-09 06:09:12.858482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:100608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.419 [2024-12-09 06:09:12.858497] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:10:34.419 [2024-12-09 06:09:12.858519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:100616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.419 [2024-12-09 06:09:12.858535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:10:34.419 [2024-12-09 06:09:12.858558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:100624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.419 [2024-12-09 06:09:12.858574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:10:34.419 9784.23 IOPS, 38.22 MiB/s [2024-12-09T06:09:29.006Z] 9085.36 IOPS, 35.49 MiB/s [2024-12-09T06:09:29.006Z] 8479.67 IOPS, 33.12 MiB/s [2024-12-09T06:09:29.006Z] 8203.12 IOPS, 32.04 MiB/s [2024-12-09T06:09:29.006Z] 8373.76 IOPS, 32.71 MiB/s [2024-12-09T06:09:29.006Z] 8526.78 IOPS, 33.31 MiB/s [2024-12-09T06:09:29.006Z] 8574.37 IOPS, 33.49 MiB/s [2024-12-09T06:09:29.006Z] 8614.75 IOPS, 33.65 MiB/s [2024-12-09T06:09:29.006Z] 8677.29 IOPS, 33.90 MiB/s [2024-12-09T06:09:29.006Z] 8786.50 IOPS, 34.32 MiB/s [2024-12-09T06:09:29.006Z] 8884.83 IOPS, 34.71 MiB/s [2024-12-09T06:09:29.006Z] 8943.79 IOPS, 34.94 MiB/s [2024-12-09T06:09:29.006Z] 8962.20 IOPS, 35.01 MiB/s [2024-12-09T06:09:29.006Z] 8974.12 IOPS, 35.06 MiB/s [2024-12-09T06:09:29.006Z] [2024-12-09 06:09:26.274003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:37920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.419 [2024-12-09 06:09:26.274061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:10:34.419 [2024-12-09 06:09:26.274123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:37952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.419 [2024-12-09 06:09:26.274170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:10:34.419 [2024-12-09 06:09:26.274189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:37984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:34.419 [2024-12-09 06:09:26.274203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:10:34.419 [2024-12-09 06:09:26.274221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.419 [2024-12-09 06:09:26.274235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:10:34.419 [2024-12-09 06:09:26.274254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.419 [2024-12-09 06:09:26.274267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:10:34.419 [2024-12-09 06:09:26.274286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:10:34.419 
[2024-12-09 06:09:26.274300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:10:34.419 9012.56 IOPS, 35.21 MiB/s [2024-12-09T06:09:29.006Z] 9048.96 IOPS, 35.35 MiB/s [2024-12-09T06:09:29.006Z] Received shutdown signal, test time was about 28.381674 seconds 01:10:34.419 01:10:34.419 Latency(us) 01:10:34.419 [2024-12-09T06:09:29.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:10:34.419 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:10:34.419 Verification LBA range: start 0x0 length 0x4000 01:10:34.419 Nvme0n1 : 28.38 9061.09 35.39 0.00 0.00 14104.68 113.50 3018551.31 01:10:34.419 [2024-12-09T06:09:29.006Z] =================================================================================================================== 01:10:34.419 [2024-12-09T06:09:29.006Z] Total : 9061.09 35.39 0.00 0.00 14104.68 113.50 3018551.31 01:10:34.419 06:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:10:34.717 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 01:10:34.717 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:10:34.717 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 01:10:34.717 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 01:10:34.717 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 01:10:34.717 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:10:34.717 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 01:10:34.717 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 01:10:34.717 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:10:34.717 rmmod nvme_tcp 01:10:34.717 rmmod nvme_fabrics 01:10:34.717 rmmod nvme_keyring 01:10:34.717 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:10:34.717 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 01:10:34.717 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 01:10:34.717 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 75970 ']' 01:10:34.717 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 75970 01:10:34.717 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 75970 ']' 01:10:34.717 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 75970 01:10:34.717 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 01:10:34.717 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:10:34.717 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75970 01:10:34.717 06:09:29 
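The killprocess helper traced here for pid 75970 (and earlier for pid 76017) follows a common shell pattern: probe the pid with signal 0, read the command name back with ps, refuse to signal sudo, then kill and wait. A rough sketch of that idea, with placeholder names, not the autotest_common.sh implementation itself:

    killprocess_sketch() {
        local pid=$1
        [ -n "$pid" ] || return 1                  # refuse an empty pid
        kill -0 "$pid" 2>/dev/null || return 0     # signal 0 only tests that the process exists
        local name
        name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0 / reactor_2 for SPDK apps
        [ "$name" = sudo ] && return 0             # never signal a bare sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true            # wait only applies to children of this shell
    }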
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:10:34.717 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:10:34.717 killing process with pid 75970 01:10:34.717 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75970' 01:10:34.717 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 75970 01:10:34.717 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 75970 01:10:34.987 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:10:34.987 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:10:34.987 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:10:34.987 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 01:10:34.987 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 01:10:34.987 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:10:34.987 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 01:10:34.987 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:10:34.987 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:10:34.987 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:10:34.987 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:10:34.987 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:10:34.987 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:10:34.987 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:10:34.987 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:10:34.987 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:10:34.987 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:10:34.987 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:10:35.257 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:10:35.257 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:10:35.257 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:10:35.257 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:10:35.257 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 01:10:35.257 06:09:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:35.257 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:10:35.257 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:35.257 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 01:10:35.257 01:10:35.257 real 0m34.185s 01:10:35.257 user 1m43.991s 01:10:35.257 sys 0m12.659s 01:10:35.257 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 01:10:35.257 06:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:10:35.257 ************************************ 01:10:35.257 END TEST nvmf_host_multipath_status 01:10:35.257 ************************************ 01:10:35.257 06:09:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 01:10:35.257 06:09:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:10:35.257 06:09:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:10:35.257 06:09:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:10:35.257 ************************************ 01:10:35.257 START TEST nvmf_discovery_remove_ifc 01:10:35.257 ************************************ 01:10:35.257 06:09:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 01:10:35.516 * Looking for test storage... 
01:10:35.516 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:10:35.516 06:09:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:10:35.516 06:09:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 01:10:35.516 06:09:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:10:35.516 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:10:35.516 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:10:35.516 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 01:10:35.516 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 01:10:35.516 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:10:35.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:35.517 --rc genhtml_branch_coverage=1 01:10:35.517 --rc genhtml_function_coverage=1 01:10:35.517 --rc genhtml_legend=1 01:10:35.517 --rc geninfo_all_blocks=1 01:10:35.517 --rc geninfo_unexecuted_blocks=1 01:10:35.517 01:10:35.517 ' 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:10:35.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:35.517 --rc genhtml_branch_coverage=1 01:10:35.517 --rc genhtml_function_coverage=1 01:10:35.517 --rc genhtml_legend=1 01:10:35.517 --rc geninfo_all_blocks=1 01:10:35.517 --rc geninfo_unexecuted_blocks=1 01:10:35.517 01:10:35.517 ' 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:10:35.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:35.517 --rc genhtml_branch_coverage=1 01:10:35.517 --rc genhtml_function_coverage=1 01:10:35.517 --rc genhtml_legend=1 01:10:35.517 --rc geninfo_all_blocks=1 01:10:35.517 --rc geninfo_unexecuted_blocks=1 01:10:35.517 01:10:35.517 ' 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:10:35.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:35.517 --rc genhtml_branch_coverage=1 01:10:35.517 --rc genhtml_function_coverage=1 01:10:35.517 --rc genhtml_legend=1 01:10:35.517 --rc geninfo_all_blocks=1 01:10:35.517 --rc geninfo_unexecuted_blocks=1 01:10:35.517 01:10:35.517 ' 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:10:35.517 06:09:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:10:35.517 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:10:35.517 06:09:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:10:35.517 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:10:35.518 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:10:35.518 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:10:35.777 Cannot find device "nvmf_init_br" 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:10:35.777 Cannot find device "nvmf_init_br2" 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:10:35.777 Cannot find device "nvmf_tgt_br" 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:10:35.777 Cannot find device "nvmf_tgt_br2" 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:10:35.777 Cannot find device "nvmf_init_br" 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:10:35.777 Cannot find device "nvmf_init_br2" 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:10:35.777 Cannot find device "nvmf_tgt_br" 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:10:35.777 Cannot find device "nvmf_tgt_br2" 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:10:35.777 Cannot find device "nvmf_br" 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:10:35.777 Cannot find device "nvmf_init_if" 01:10:35.777 06:09:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:10:35.777 Cannot find device "nvmf_init_if2" 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:10:35.777 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:10:35.777 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:10:35.777 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:10:36.036 06:09:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:10:36.036 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:10:36.036 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.120 ms 01:10:36.036 01:10:36.036 --- 10.0.0.3 ping statistics --- 01:10:36.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:36.036 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:10:36.036 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:10:36.036 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms 01:10:36.036 01:10:36.036 --- 10.0.0.4 ping statistics --- 01:10:36.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:36.036 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:10:36.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:10:36.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 01:10:36.036 01:10:36.036 --- 10.0.0.1 ping statistics --- 01:10:36.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:36.036 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 01:10:36.036 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:10:36.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:10:36.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 01:10:36.295 01:10:36.295 --- 10.0.0.2 ping statistics --- 01:10:36.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:36.295 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 01:10:36.295 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:10:36.295 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 01:10:36.295 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:10:36.295 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:10:36.295 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:10:36.295 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:10:36.295 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:10:36.295 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:10:36.295 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:10:36.295 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 01:10:36.295 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:10:36.295 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 01:10:36.295 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:10:36.295 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=76810 01:10:36.295 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:10:36.295 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 76810 01:10:36.295 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 76810 ']' 01:10:36.295 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:10:36.295 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 01:10:36.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:10:36.295 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
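Note: condensed for reference, the nvmf_veth_init sequence traced above amounts to the sketch below (interface names, namespace and addresses are exactly the ones in this log; root privileges and iproute2/iptables are assumed, and the ipts/comment wrappers are dropped):

  # target side gets its own network namespace
  ip netns add nvmf_tgt_ns_spdk
  # two initiator-facing and two target-facing veth pairs
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiator addresses on the host, target addresses inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bring everything up
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the peer ends together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  # let NVMe/TCP traffic (port 4420) in and allow bridge forwarding
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # reachability check in both directions
  ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

Keeping the target in its own namespace is what lets the test later remove 10.0.0.3 from nvmf_tgt_if without touching the host's networking; the earlier "Cannot find device" / "Cannot open network namespace" lines are the tolerated cleanup of a previous run's interfaces and are expected on a fresh node.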
01:10:36.295 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 01:10:36.295 06:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:10:36.295 [2024-12-09 06:09:30.719247] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:10:36.295 [2024-12-09 06:09:30.719718] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:10:36.295 [2024-12-09 06:09:30.872268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:10:36.553 [2024-12-09 06:09:30.911538] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:10:36.553 [2024-12-09 06:09:30.911576] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:10:36.553 [2024-12-09 06:09:30.911586] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:10:36.553 [2024-12-09 06:09:30.911593] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:10:36.553 [2024-12-09 06:09:30.911600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:10:36.553 [2024-12-09 06:09:30.911847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:10:36.553 [2024-12-09 06:09:30.953747] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:10:37.121 06:09:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:10:37.121 06:09:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 01:10:37.121 06:09:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:10:37.121 06:09:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 01:10:37.121 06:09:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:10:37.121 06:09:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:10:37.121 06:09:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 01:10:37.121 06:09:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:37.121 06:09:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:10:37.121 [2024-12-09 06:09:31.658215] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:10:37.121 [2024-12-09 06:09:31.666325] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 01:10:37.121 null0 01:10:37.121 [2024-12-09 06:09:31.698196] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:10:37.380 06:09:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:37.380 06:09:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=76842 01:10:37.380 06:09:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 01:10:37.380 06:09:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 76842 /tmp/host.sock 01:10:37.380 06:09:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 76842 ']' 01:10:37.380 06:09:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 01:10:37.380 06:09:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 01:10:37.380 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 01:10:37.380 06:09:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 01:10:37.380 06:09:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 01:10:37.380 06:09:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:10:37.380 [2024-12-09 06:09:31.778260] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:10:37.380 [2024-12-09 06:09:31.778324] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76842 ] 01:10:37.380 [2024-12-09 06:09:31.934783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:10:37.639 [2024-12-09 06:09:31.974720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:10:38.209 06:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:10:38.209 06:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 01:10:38.210 06:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:10:38.210 06:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 01:10:38.210 06:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:38.210 06:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:10:38.210 06:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:38.210 06:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 01:10:38.210 06:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:38.210 06:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:10:38.210 [2024-12-09 06:09:32.675963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:10:38.210 06:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:38.210 06:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 01:10:38.210 06:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:38.210 06:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:10:39.150 [2024-12-09 06:09:33.727596] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 01:10:39.150 [2024-12-09 06:09:33.727627] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 01:10:39.150 [2024-12-09 06:09:33.727646] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:10:39.150 [2024-12-09 06:09:33.733627] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 01:10:39.410 [2024-12-09 06:09:33.789027] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 01:10:39.410 [2024-12-09 06:09:33.789986] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1c45f00:1 started. 01:10:39.410 [2024-12-09 06:09:33.791786] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 01:10:39.410 [2024-12-09 06:09:33.791859] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 01:10:39.410 [2024-12-09 06:09:33.791882] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 01:10:39.410 [2024-12-09 06:09:33.791899] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 01:10:39.410 [2024-12-09 06:09:33.791923] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 01:10:39.410 06:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:39.410 06:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 01:10:39.410 [2024-12-09 06:09:33.796212] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1c45f00 was disconnected and freed. delete nvme_qpair. 
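For orientation, the host-side steps traced above reduce to launching a second nvmf_tgt and driving it over its private RPC socket. A minimal sketch, assuming rpc_cmd from autotest_common.sh is equivalent to calling scripts/rpc.py against the same socket (flags copied verbatim from the trace):

  # host application: one core, private RPC socket, start paused, bdev_nvme debug log
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  # (the harness waits for /tmp/host.sock to appear via waitforlisten before issuing RPCs)

  RPC='scripts/rpc.py -s /tmp/host.sock'
  $RPC bdev_nvme_set_options -e 1    # option flags exactly as traced; set before init because of --wait-for-rpc
  $RPC framework_start_init          # finish the startup that --wait-for-rpc deferred
  # connect to the target's discovery service on 10.0.0.3:8009 and auto-attach what it advertises
  $RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
      --wait-for-attach

With --wait-for-attach the discovery RPC only returns once the advertised subsystem (nqn.2016-06.io.spdk:cnode0 on 10.0.0.3:4420) has been attached, which is why nvme0 and nvme0n1 already exist by the time the first get_bdev_list below runs.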
01:10:39.410 06:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:10:39.410 06:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:10:39.410 06:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:10:39.410 06:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:10:39.410 06:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:39.410 06:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:10:39.410 06:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:10:39.410 06:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:39.410 06:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 01:10:39.410 06:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 01:10:39.410 06:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 01:10:39.410 06:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 01:10:39.410 06:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:10:39.410 06:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:10:39.410 06:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:10:39.410 06:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:39.410 06:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:10:39.410 06:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:10:39.410 06:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:10:39.410 06:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:39.410 06:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:10:39.410 06:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:10:40.350 06:09:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:10:40.350 06:09:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:10:40.350 06:09:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:10:40.350 06:09:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:40.350 06:09:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:10:40.350 06:09:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:10:40.350 06:09:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:10:40.610 06:09:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:40.610 06:09:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:10:40.610 06:09:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:10:41.547 06:09:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:10:41.547 06:09:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:10:41.547 06:09:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:10:41.547 06:09:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:41.547 06:09:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:10:41.547 06:09:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:10:41.547 06:09:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:10:41.547 06:09:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:41.547 06:09:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:10:41.547 06:09:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:10:42.481 06:09:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:10:42.481 06:09:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:10:42.481 06:09:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:42.481 06:09:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:10:42.481 06:09:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:10:42.481 06:09:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:10:42.481 06:09:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:10:42.740 06:09:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:42.740 06:09:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:10:42.740 06:09:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:10:43.678 06:09:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:10:43.678 06:09:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:10:43.678 06:09:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:43.678 06:09:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:10:43.678 06:09:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:10:43.678 06:09:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:10:43.678 06:09:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:10:43.678 06:09:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:43.678 06:09:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:10:43.678 06:09:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:10:44.616 06:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:10:44.616 06:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:10:44.616 06:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:10:44.616 06:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:44.616 06:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:10:44.616 06:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:10:44.616 06:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:10:44.616 06:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:44.875 06:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:10:44.875 06:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:10:44.875 [2024-12-09 06:09:39.210594] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 01:10:44.875 [2024-12-09 06:09:39.210652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:10:44.875 [2024-12-09 06:09:39.210666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:44.875 [2024-12-09 06:09:39.210679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:10:44.875 [2024-12-09 06:09:39.210688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:44.875 [2024-12-09 06:09:39.210698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:10:44.875 [2024-12-09 06:09:39.210706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:44.875 [2024-12-09 06:09:39.210716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:10:44.875 [2024-12-09 06:09:39.210725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:44.875 [2024-12-09 06:09:39.210734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 01:10:44.875 [2024-12-09 
06:09:39.210742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:44.875 [2024-12-09 06:09:39.210751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c21fc0 is same with the state(6) to be set 01:10:44.875 [2024-12-09 06:09:39.220573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c21fc0 (9): Bad file descriptor 01:10:44.875 [2024-12-09 06:09:39.230572] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:10:44.875 [2024-12-09 06:09:39.230589] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:10:44.875 [2024-12-09 06:09:39.230595] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:10:44.875 [2024-12-09 06:09:39.230601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:10:44.875 [2024-12-09 06:09:39.230639] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:10:45.816 06:09:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:10:45.816 06:09:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:10:45.816 06:09:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:10:45.816 06:09:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:45.816 06:09:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:10:45.816 06:09:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:10:45.816 06:09:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:10:45.816 [2024-12-09 06:09:40.256263] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 01:10:45.816 [2024-12-09 06:09:40.256391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c21fc0 with addr=10.0.0.3, port=4420 01:10:45.816 [2024-12-09 06:09:40.256439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c21fc0 is same with the state(6) to be set 01:10:45.816 [2024-12-09 06:09:40.256525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c21fc0 (9): Bad file descriptor 01:10:45.816 [2024-12-09 06:09:40.257639] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 01:10:45.816 [2024-12-09 06:09:40.257737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:10:45.816 [2024-12-09 06:09:40.257769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:10:45.816 [2024-12-09 06:09:40.257802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:10:45.816 [2024-12-09 06:09:40.257830] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
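The get_bdev_list / sleep 1 pairs that repeat above and below are the test's polling helper; a rough reconstruction from the traced pieces (the real wait_for_bdev in discovery_remove_ifc.sh may cap the number of retries):

  get_bdev_list() {
      # every bdev name known to the host app, sorted, on one line
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      local expected=$1
      # re-check once a second until the list matches the expectation
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }

At this point the list still reads nvme0n1: the address was removed and nvmf_tgt_if taken down at @75/@76 above, so every reconnect attempt fails (connect() errno 110, then "Bad file descriptor"), but the controller, and with it nvme0n1, is only dropped once the ctrlr-loss timeout expires.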
01:10:45.816 [2024-12-09 06:09:40.257850] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:10:45.816 [2024-12-09 06:09:40.257867] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:10:45.816 [2024-12-09 06:09:40.257898] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:10:45.816 [2024-12-09 06:09:40.257917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:10:45.816 06:09:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:45.816 06:09:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:10:45.816 06:09:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:10:46.756 [2024-12-09 06:09:41.256393] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:10:46.756 [2024-12-09 06:09:41.256434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:10:46.756 [2024-12-09 06:09:41.256476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:10:46.756 [2024-12-09 06:09:41.256487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:10:46.756 [2024-12-09 06:09:41.256499] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 01:10:46.756 [2024-12-09 06:09:41.256510] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:10:46.756 [2024-12-09 06:09:41.256518] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:10:46.756 [2024-12-09 06:09:41.256524] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
01:10:46.756 [2024-12-09 06:09:41.256559] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 01:10:46.756 [2024-12-09 06:09:41.256603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:10:46.756 [2024-12-09 06:09:41.256616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:46.756 [2024-12-09 06:09:41.256631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:10:46.757 [2024-12-09 06:09:41.256640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:46.757 [2024-12-09 06:09:41.256650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:10:46.757 [2024-12-09 06:09:41.256660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:46.757 [2024-12-09 06:09:41.256670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:10:46.757 [2024-12-09 06:09:41.256680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:46.757 [2024-12-09 06:09:41.256690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 01:10:46.757 [2024-12-09 06:09:41.256699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:46.757 [2024-12-09 06:09:41.256709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
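The failure cascade above traces back to the two commands issued earlier at discovery_remove_ifc.sh@75/@76; condensed, the fault-injection half of the test is just (same namespace and device names as above):

  # pull the listen address out from under the established connections and down the link
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

  # reconnects retry every --reconnect-delay-sec until --ctrlr-loss-timeout-sec expires,
  # after which bdev_nvme deletes the controller, removes the discovery entry and nvme0n1
  wait_for_bdev ''

So the spdk_sock_recv()/connect() errors and the "Resetting controller failed" messages in this stretch are the intended outcome of the injected fault rather than a test failure; the check that matters is the bdev list going empty, which the next get_bdev_list below confirms.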
01:10:46.757 [2024-12-09 06:09:41.256969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bada20 (9): Bad file descriptor 01:10:46.757 [2024-12-09 06:09:41.257978] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 01:10:46.757 [2024-12-09 06:09:41.258001] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 01:10:46.757 06:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:10:46.757 06:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:10:46.757 06:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:10:46.757 06:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:46.757 06:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:10:46.757 06:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:10:46.757 06:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:10:46.757 06:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:47.016 06:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 01:10:47.016 06:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:10:47.016 06:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:10:47.016 06:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 01:10:47.016 06:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:10:47.016 06:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:10:47.016 06:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:10:47.016 06:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:47.016 06:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:10:47.016 06:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:10:47.016 06:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:10:47.016 06:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:47.016 06:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 01:10:47.016 06:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:10:47.955 06:09:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:10:47.955 06:09:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:10:47.955 06:09:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:47.955 06:09:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:10:47.955 06:09:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:10:47.955 06:09:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:10:47.955 06:09:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:10:47.955 06:09:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:47.955 06:09:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 01:10:47.955 06:09:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:10:48.895 [2024-12-09 06:09:43.266447] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 01:10:48.895 [2024-12-09 06:09:43.266475] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 01:10:48.895 [2024-12-09 06:09:43.266492] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:10:48.895 [2024-12-09 06:09:43.272473] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 01:10:48.895 [2024-12-09 06:09:43.326683] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 01:10:48.895 [2024-12-09 06:09:43.327348] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1c4e1d0:1 started. 01:10:48.895 [2024-12-09 06:09:43.328421] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 01:10:48.895 [2024-12-09 06:09:43.328467] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 01:10:48.895 [2024-12-09 06:09:43.328488] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 01:10:48.895 [2024-12-09 06:09:43.328506] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 01:10:48.895 [2024-12-09 06:09:43.328515] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 01:10:48.895 [2024-12-09 06:09:43.334910] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1c4e1d0 was disconnected and freed. delete nvme_qpair. 
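The xtrace above restores the target address inside the nvmf_tgt_ns_spdk namespace and then polls the host bdev list over the RPC socket until nvme1n1 reappears once the discovery service re-attaches. A minimal bash sketch of that polling pattern, reconstructed only from the commands visible in the trace (rpc_cmd, the /tmp/host.sock socket, and the jq | sort | xargs pipeline are taken from the trace; the actual helper bodies in host/discovery_remove_ifc.sh may differ):

  # Sketch only: reconstructs the polling visible in the xtrace above.
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      local bdev=$1            # e.g. nvme1n1, as in "wait_for_bdev nvme1n1" above
      while [[ "$(get_bdev_list)" != "$bdev" ]]; do
          sleep 1              # the trace shows a 1-second poll interval
      done
  }

  # Restore the target address inside the namespace, then wait for re-attach:
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  wait_for_bdev nvme1n1

The empty-string comparison seen earlier ([[ '' != nvme1n1 ]]) is this same loop observing that no bdev exists yet; once the discovery poller re-creates nvme1n1 the comparison succeeds and the test proceeds to teardown.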
01:10:49.155 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:10:49.155 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:10:49.155 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:10:49.155 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:49.155 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:10:49.155 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:10:49.155 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:10:49.155 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:49.155 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 01:10:49.155 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 01:10:49.155 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 76842 01:10:49.155 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 76842 ']' 01:10:49.155 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 76842 01:10:49.155 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 01:10:49.155 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:10:49.155 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76842 01:10:49.155 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:10:49.155 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:10:49.155 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76842' 01:10:49.155 killing process with pid 76842 01:10:49.155 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 76842 01:10:49.155 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 76842 01:10:49.155 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 01:10:49.415 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 01:10:49.415 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 01:10:49.415 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:10:49.415 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 01:10:49.415 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 01:10:49.415 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:10:49.415 rmmod nvme_tcp 01:10:49.415 rmmod nvme_fabrics 01:10:49.415 rmmod nvme_keyring 01:10:49.415 06:09:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:10:49.415 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 01:10:49.415 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 01:10:49.415 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 76810 ']' 01:10:49.415 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 76810 01:10:49.415 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 76810 ']' 01:10:49.415 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 76810 01:10:49.415 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 01:10:49.415 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:10:49.415 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76810 01:10:49.415 killing process with pid 76810 01:10:49.415 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:10:49.415 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:10:49.415 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76810' 01:10:49.415 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 76810 01:10:49.415 06:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 76810 01:10:49.675 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:10:49.675 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:10:49.675 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:10:49.675 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 01:10:49.675 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:10:49.675 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 01:10:49.675 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 01:10:49.675 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:10:49.675 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:10:49.675 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:10:49.675 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:10:49.936 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:10:49.936 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:10:49.936 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:10:49.936 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:10:49.936 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:10:49.936 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:10:49.936 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:10:49.936 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:10:49.936 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:10:49.936 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:10:49.936 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:10:49.936 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 01:10:49.936 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:49.936 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:10:49.936 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:49.936 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 01:10:49.936 01:10:49.936 real 0m14.703s 01:10:49.936 user 0m23.563s 01:10:49.936 sys 0m3.646s 01:10:49.936 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:10:49.936 ************************************ 01:10:49.936 END TEST nvmf_discovery_remove_ifc 01:10:49.936 ************************************ 01:10:49.936 06:09:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:10:50.196 06:09:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 01:10:50.196 06:09:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:10:50.196 06:09:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:10:50.196 06:09:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:10:50.196 ************************************ 01:10:50.196 START TEST nvmf_identify_kernel_target 01:10:50.196 ************************************ 01:10:50.196 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 01:10:50.196 * Looking for test storage... 
01:10:50.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:10:50.196 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:10:50.196 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 01:10:50.196 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:10:50.457 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:10:50.457 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:10:50.457 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 01:10:50.457 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:10:50.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:50.458 --rc genhtml_branch_coverage=1 01:10:50.458 --rc genhtml_function_coverage=1 01:10:50.458 --rc genhtml_legend=1 01:10:50.458 --rc geninfo_all_blocks=1 01:10:50.458 --rc geninfo_unexecuted_blocks=1 01:10:50.458 01:10:50.458 ' 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:10:50.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:50.458 --rc genhtml_branch_coverage=1 01:10:50.458 --rc genhtml_function_coverage=1 01:10:50.458 --rc genhtml_legend=1 01:10:50.458 --rc geninfo_all_blocks=1 01:10:50.458 --rc geninfo_unexecuted_blocks=1 01:10:50.458 01:10:50.458 ' 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:10:50.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:50.458 --rc genhtml_branch_coverage=1 01:10:50.458 --rc genhtml_function_coverage=1 01:10:50.458 --rc genhtml_legend=1 01:10:50.458 --rc geninfo_all_blocks=1 01:10:50.458 --rc geninfo_unexecuted_blocks=1 01:10:50.458 01:10:50.458 ' 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:10:50.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:50.458 --rc genhtml_branch_coverage=1 01:10:50.458 --rc genhtml_function_coverage=1 01:10:50.458 --rc genhtml_legend=1 01:10:50.458 --rc geninfo_all_blocks=1 01:10:50.458 --rc geninfo_unexecuted_blocks=1 01:10:50.458 01:10:50.458 ' 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
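The trace above walks scripts/common.sh deciding whether the installed lcov is older than 2.x (lt 1.15 2 via cmp_versions) before choosing the LCOV_OPTS coverage flags. A rough bash sketch of that field-by-field comparison, reconstructed from the steps shown (split both version strings on '.', '-' and ':' and compare each numeric field in turn); the real helper in scripts/common.sh handles more edge cases than this:

  # Sketch only: mirrors the cmp_versions trace above, not the exact SPDK code.
  lt() { cmp_versions "$1" '<' "$2"; }

  cmp_versions() {
      local IFS=.-:                 # split version strings on '.', '-' and ':'
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local op=$2 v
      local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for ((v = 0; v < max; v++)); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' || $op == '>=' ]]; return; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' || $op == '<=' ]]; return; }
      done
      [[ $op == '==' || $op == '<=' || $op == '>=' ]]
  }

  lt 1.15 2 && echo "lcov is older than 2.x"   # the check made in the trace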
01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:10:50.458 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 01:10:50.458 06:09:44 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:10:50.458 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:10:50.459 06:09:44 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:10:50.459 Cannot find device "nvmf_init_br" 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:10:50.459 Cannot find device "nvmf_init_br2" 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:10:50.459 Cannot find device "nvmf_tgt_br" 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:10:50.459 Cannot find device "nvmf_tgt_br2" 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:10:50.459 Cannot find device "nvmf_init_br" 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 01:10:50.459 06:09:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:10:50.459 Cannot find device "nvmf_init_br2" 01:10:50.459 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 01:10:50.459 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:10:50.459 Cannot find device "nvmf_tgt_br" 01:10:50.459 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 01:10:50.459 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:10:50.717 Cannot find device "nvmf_tgt_br2" 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:10:50.717 Cannot find device "nvmf_br" 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:10:50.717 Cannot find device "nvmf_init_if" 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:10:50.717 Cannot find device "nvmf_init_if2" 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:10:50.717 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:10:50.717 06:09:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:10:50.717 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:10:50.717 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:10:50.976 06:09:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:10:50.976 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:10:50.976 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 01:10:50.976 01:10:50.976 --- 10.0.0.3 ping statistics --- 01:10:50.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:50.976 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:10:50.976 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:10:50.976 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.031 ms 01:10:50.976 01:10:50.976 --- 10.0.0.4 ping statistics --- 01:10:50.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:50.976 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:10:50.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:10:50.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 01:10:50.976 01:10:50.976 --- 10.0.0.1 ping statistics --- 01:10:50.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:50.976 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:10:50.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:10:50.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 01:10:50.976 01:10:50.976 --- 10.0.0.2 ping statistics --- 01:10:50.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:50.976 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 01:10:50.976 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 01:10:50.977 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 01:10:50.977 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 01:10:50.977 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:10:50.977 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:10:50.977 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 01:10:50.977 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 01:10:50.977 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 01:10:50.977 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 01:10:50.977 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 01:10:51.235 06:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:10:51.803 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:10:51.803 Waiting for block devices as requested 01:10:51.803 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:10:51.803 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:10:52.062 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:10:52.062 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 01:10:52.062 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 01:10:52.062 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 01:10:52.062 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:10:52.062 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:10:52.062 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 01:10:52.062 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 01:10:52.062 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 01:10:52.062 No valid GPT data, bailing 01:10:52.062 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:10:52.062 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 01:10:52.062 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 01:10:52.062 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 01:10:52.062 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:10:52.062 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 01:10:52.062 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 01:10:52.062 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 01:10:52.062 06:09:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 01:10:52.062 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:10:52.062 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 01:10:52.062 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 01:10:52.062 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 01:10:52.062 No valid GPT data, bailing 01:10:52.062 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 01:10:52.062 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 01:10:52.062 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 01:10:52.062 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 01:10:52.062 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:10:52.063 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 01:10:52.063 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 01:10:52.063 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 01:10:52.063 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 01:10:52.063 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:10:52.063 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 01:10:52.063 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 01:10:52.063 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 01:10:52.322 No valid GPT data, bailing 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 01:10:52.322 No valid GPT data, bailing 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 01:10:52.322 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 01:10:52.323 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid=bac40580-41f0-4da4-8cd9-1be4901a67b8 -a 10.0.0.1 -t tcp -s 4420 01:10:52.323 01:10:52.323 Discovery Log Number of Records 2, Generation counter 2 01:10:52.323 =====Discovery Log Entry 0====== 01:10:52.323 trtype: tcp 01:10:52.323 adrfam: ipv4 01:10:52.323 subtype: current discovery subsystem 01:10:52.323 treq: not specified, sq flow control disable supported 01:10:52.323 portid: 1 01:10:52.323 trsvcid: 4420 01:10:52.323 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:10:52.323 traddr: 10.0.0.1 01:10:52.323 eflags: none 01:10:52.323 sectype: none 01:10:52.323 =====Discovery Log Entry 1====== 01:10:52.323 trtype: tcp 01:10:52.323 adrfam: ipv4 01:10:52.323 subtype: nvme subsystem 01:10:52.323 treq: not 
specified, sq flow control disable supported 01:10:52.323 portid: 1 01:10:52.323 trsvcid: 4420 01:10:52.323 subnqn: nqn.2016-06.io.spdk:testnqn 01:10:52.323 traddr: 10.0.0.1 01:10:52.323 eflags: none 01:10:52.323 sectype: none 01:10:52.323 06:09:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 01:10:52.323 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 01:10:52.582 ===================================================== 01:10:52.582 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 01:10:52.582 ===================================================== 01:10:52.582 Controller Capabilities/Features 01:10:52.582 ================================ 01:10:52.582 Vendor ID: 0000 01:10:52.582 Subsystem Vendor ID: 0000 01:10:52.582 Serial Number: a7594dc4dcca7e618447 01:10:52.582 Model Number: Linux 01:10:52.582 Firmware Version: 6.8.9-20 01:10:52.582 Recommended Arb Burst: 0 01:10:52.582 IEEE OUI Identifier: 00 00 00 01:10:52.582 Multi-path I/O 01:10:52.582 May have multiple subsystem ports: No 01:10:52.582 May have multiple controllers: No 01:10:52.582 Associated with SR-IOV VF: No 01:10:52.582 Max Data Transfer Size: Unlimited 01:10:52.582 Max Number of Namespaces: 0 01:10:52.582 Max Number of I/O Queues: 1024 01:10:52.582 NVMe Specification Version (VS): 1.3 01:10:52.582 NVMe Specification Version (Identify): 1.3 01:10:52.582 Maximum Queue Entries: 1024 01:10:52.582 Contiguous Queues Required: No 01:10:52.582 Arbitration Mechanisms Supported 01:10:52.582 Weighted Round Robin: Not Supported 01:10:52.582 Vendor Specific: Not Supported 01:10:52.582 Reset Timeout: 7500 ms 01:10:52.582 Doorbell Stride: 4 bytes 01:10:52.582 NVM Subsystem Reset: Not Supported 01:10:52.582 Command Sets Supported 01:10:52.582 NVM Command Set: Supported 01:10:52.582 Boot Partition: Not Supported 01:10:52.582 Memory Page Size Minimum: 4096 bytes 01:10:52.582 Memory Page Size Maximum: 4096 bytes 01:10:52.582 Persistent Memory Region: Not Supported 01:10:52.582 Optional Asynchronous Events Supported 01:10:52.582 Namespace Attribute Notices: Not Supported 01:10:52.582 Firmware Activation Notices: Not Supported 01:10:52.582 ANA Change Notices: Not Supported 01:10:52.582 PLE Aggregate Log Change Notices: Not Supported 01:10:52.582 LBA Status Info Alert Notices: Not Supported 01:10:52.582 EGE Aggregate Log Change Notices: Not Supported 01:10:52.582 Normal NVM Subsystem Shutdown event: Not Supported 01:10:52.582 Zone Descriptor Change Notices: Not Supported 01:10:52.582 Discovery Log Change Notices: Supported 01:10:52.582 Controller Attributes 01:10:52.582 128-bit Host Identifier: Not Supported 01:10:52.582 Non-Operational Permissive Mode: Not Supported 01:10:52.582 NVM Sets: Not Supported 01:10:52.582 Read Recovery Levels: Not Supported 01:10:52.582 Endurance Groups: Not Supported 01:10:52.582 Predictable Latency Mode: Not Supported 01:10:52.582 Traffic Based Keep ALive: Not Supported 01:10:52.582 Namespace Granularity: Not Supported 01:10:52.582 SQ Associations: Not Supported 01:10:52.582 UUID List: Not Supported 01:10:52.582 Multi-Domain Subsystem: Not Supported 01:10:52.582 Fixed Capacity Management: Not Supported 01:10:52.582 Variable Capacity Management: Not Supported 01:10:52.582 Delete Endurance Group: Not Supported 01:10:52.582 Delete NVM Set: Not Supported 01:10:52.582 Extended LBA Formats Supported: Not Supported 01:10:52.582 Flexible Data 
Placement Supported: Not Supported 01:10:52.582 01:10:52.582 Controller Memory Buffer Support 01:10:52.582 ================================ 01:10:52.582 Supported: No 01:10:52.582 01:10:52.582 Persistent Memory Region Support 01:10:52.582 ================================ 01:10:52.582 Supported: No 01:10:52.582 01:10:52.582 Admin Command Set Attributes 01:10:52.582 ============================ 01:10:52.582 Security Send/Receive: Not Supported 01:10:52.582 Format NVM: Not Supported 01:10:52.582 Firmware Activate/Download: Not Supported 01:10:52.582 Namespace Management: Not Supported 01:10:52.582 Device Self-Test: Not Supported 01:10:52.583 Directives: Not Supported 01:10:52.583 NVMe-MI: Not Supported 01:10:52.583 Virtualization Management: Not Supported 01:10:52.583 Doorbell Buffer Config: Not Supported 01:10:52.583 Get LBA Status Capability: Not Supported 01:10:52.583 Command & Feature Lockdown Capability: Not Supported 01:10:52.583 Abort Command Limit: 1 01:10:52.583 Async Event Request Limit: 1 01:10:52.583 Number of Firmware Slots: N/A 01:10:52.583 Firmware Slot 1 Read-Only: N/A 01:10:52.583 Firmware Activation Without Reset: N/A 01:10:52.583 Multiple Update Detection Support: N/A 01:10:52.583 Firmware Update Granularity: No Information Provided 01:10:52.583 Per-Namespace SMART Log: No 01:10:52.583 Asymmetric Namespace Access Log Page: Not Supported 01:10:52.583 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 01:10:52.583 Command Effects Log Page: Not Supported 01:10:52.583 Get Log Page Extended Data: Supported 01:10:52.583 Telemetry Log Pages: Not Supported 01:10:52.583 Persistent Event Log Pages: Not Supported 01:10:52.583 Supported Log Pages Log Page: May Support 01:10:52.583 Commands Supported & Effects Log Page: Not Supported 01:10:52.583 Feature Identifiers & Effects Log Page:May Support 01:10:52.583 NVMe-MI Commands & Effects Log Page: May Support 01:10:52.583 Data Area 4 for Telemetry Log: Not Supported 01:10:52.583 Error Log Page Entries Supported: 1 01:10:52.583 Keep Alive: Not Supported 01:10:52.583 01:10:52.583 NVM Command Set Attributes 01:10:52.583 ========================== 01:10:52.583 Submission Queue Entry Size 01:10:52.583 Max: 1 01:10:52.583 Min: 1 01:10:52.583 Completion Queue Entry Size 01:10:52.583 Max: 1 01:10:52.583 Min: 1 01:10:52.583 Number of Namespaces: 0 01:10:52.583 Compare Command: Not Supported 01:10:52.583 Write Uncorrectable Command: Not Supported 01:10:52.583 Dataset Management Command: Not Supported 01:10:52.583 Write Zeroes Command: Not Supported 01:10:52.583 Set Features Save Field: Not Supported 01:10:52.583 Reservations: Not Supported 01:10:52.583 Timestamp: Not Supported 01:10:52.583 Copy: Not Supported 01:10:52.583 Volatile Write Cache: Not Present 01:10:52.583 Atomic Write Unit (Normal): 1 01:10:52.583 Atomic Write Unit (PFail): 1 01:10:52.583 Atomic Compare & Write Unit: 1 01:10:52.583 Fused Compare & Write: Not Supported 01:10:52.583 Scatter-Gather List 01:10:52.583 SGL Command Set: Supported 01:10:52.583 SGL Keyed: Not Supported 01:10:52.583 SGL Bit Bucket Descriptor: Not Supported 01:10:52.583 SGL Metadata Pointer: Not Supported 01:10:52.583 Oversized SGL: Not Supported 01:10:52.583 SGL Metadata Address: Not Supported 01:10:52.583 SGL Offset: Supported 01:10:52.583 Transport SGL Data Block: Not Supported 01:10:52.583 Replay Protected Memory Block: Not Supported 01:10:52.583 01:10:52.583 Firmware Slot Information 01:10:52.583 ========================= 01:10:52.583 Active slot: 0 01:10:52.583 01:10:52.583 01:10:52.583 Error Log 
01:10:52.583 ========= 01:10:52.583 01:10:52.583 Active Namespaces 01:10:52.583 ================= 01:10:52.583 Discovery Log Page 01:10:52.583 ================== 01:10:52.583 Generation Counter: 2 01:10:52.583 Number of Records: 2 01:10:52.583 Record Format: 0 01:10:52.583 01:10:52.583 Discovery Log Entry 0 01:10:52.583 ---------------------- 01:10:52.583 Transport Type: 3 (TCP) 01:10:52.583 Address Family: 1 (IPv4) 01:10:52.583 Subsystem Type: 3 (Current Discovery Subsystem) 01:10:52.583 Entry Flags: 01:10:52.583 Duplicate Returned Information: 0 01:10:52.583 Explicit Persistent Connection Support for Discovery: 0 01:10:52.583 Transport Requirements: 01:10:52.583 Secure Channel: Not Specified 01:10:52.583 Port ID: 1 (0x0001) 01:10:52.583 Controller ID: 65535 (0xffff) 01:10:52.583 Admin Max SQ Size: 32 01:10:52.583 Transport Service Identifier: 4420 01:10:52.583 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 01:10:52.583 Transport Address: 10.0.0.1 01:10:52.583 Discovery Log Entry 1 01:10:52.583 ---------------------- 01:10:52.583 Transport Type: 3 (TCP) 01:10:52.583 Address Family: 1 (IPv4) 01:10:52.583 Subsystem Type: 2 (NVM Subsystem) 01:10:52.583 Entry Flags: 01:10:52.583 Duplicate Returned Information: 0 01:10:52.583 Explicit Persistent Connection Support for Discovery: 0 01:10:52.583 Transport Requirements: 01:10:52.583 Secure Channel: Not Specified 01:10:52.583 Port ID: 1 (0x0001) 01:10:52.583 Controller ID: 65535 (0xffff) 01:10:52.583 Admin Max SQ Size: 32 01:10:52.583 Transport Service Identifier: 4420 01:10:52.583 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 01:10:52.583 Transport Address: 10.0.0.1 01:10:52.583 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:10:52.843 get_feature(0x01) failed 01:10:52.843 get_feature(0x02) failed 01:10:52.843 get_feature(0x04) failed 01:10:52.843 ===================================================== 01:10:52.843 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:10:52.843 ===================================================== 01:10:52.843 Controller Capabilities/Features 01:10:52.843 ================================ 01:10:52.843 Vendor ID: 0000 01:10:52.843 Subsystem Vendor ID: 0000 01:10:52.843 Serial Number: 222b8ec97679da66a0be 01:10:52.843 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 01:10:52.843 Firmware Version: 6.8.9-20 01:10:52.843 Recommended Arb Burst: 6 01:10:52.843 IEEE OUI Identifier: 00 00 00 01:10:52.843 Multi-path I/O 01:10:52.843 May have multiple subsystem ports: Yes 01:10:52.843 May have multiple controllers: Yes 01:10:52.843 Associated with SR-IOV VF: No 01:10:52.843 Max Data Transfer Size: Unlimited 01:10:52.843 Max Number of Namespaces: 1024 01:10:52.843 Max Number of I/O Queues: 128 01:10:52.843 NVMe Specification Version (VS): 1.3 01:10:52.843 NVMe Specification Version (Identify): 1.3 01:10:52.843 Maximum Queue Entries: 1024 01:10:52.843 Contiguous Queues Required: No 01:10:52.843 Arbitration Mechanisms Supported 01:10:52.843 Weighted Round Robin: Not Supported 01:10:52.843 Vendor Specific: Not Supported 01:10:52.843 Reset Timeout: 7500 ms 01:10:52.843 Doorbell Stride: 4 bytes 01:10:52.843 NVM Subsystem Reset: Not Supported 01:10:52.843 Command Sets Supported 01:10:52.843 NVM Command Set: Supported 01:10:52.843 Boot Partition: Not Supported 01:10:52.843 Memory 
Page Size Minimum: 4096 bytes 01:10:52.843 Memory Page Size Maximum: 4096 bytes 01:10:52.843 Persistent Memory Region: Not Supported 01:10:52.843 Optional Asynchronous Events Supported 01:10:52.843 Namespace Attribute Notices: Supported 01:10:52.843 Firmware Activation Notices: Not Supported 01:10:52.843 ANA Change Notices: Supported 01:10:52.843 PLE Aggregate Log Change Notices: Not Supported 01:10:52.843 LBA Status Info Alert Notices: Not Supported 01:10:52.843 EGE Aggregate Log Change Notices: Not Supported 01:10:52.843 Normal NVM Subsystem Shutdown event: Not Supported 01:10:52.843 Zone Descriptor Change Notices: Not Supported 01:10:52.843 Discovery Log Change Notices: Not Supported 01:10:52.843 Controller Attributes 01:10:52.843 128-bit Host Identifier: Supported 01:10:52.843 Non-Operational Permissive Mode: Not Supported 01:10:52.843 NVM Sets: Not Supported 01:10:52.843 Read Recovery Levels: Not Supported 01:10:52.843 Endurance Groups: Not Supported 01:10:52.843 Predictable Latency Mode: Not Supported 01:10:52.843 Traffic Based Keep ALive: Supported 01:10:52.843 Namespace Granularity: Not Supported 01:10:52.843 SQ Associations: Not Supported 01:10:52.843 UUID List: Not Supported 01:10:52.843 Multi-Domain Subsystem: Not Supported 01:10:52.843 Fixed Capacity Management: Not Supported 01:10:52.843 Variable Capacity Management: Not Supported 01:10:52.843 Delete Endurance Group: Not Supported 01:10:52.843 Delete NVM Set: Not Supported 01:10:52.843 Extended LBA Formats Supported: Not Supported 01:10:52.843 Flexible Data Placement Supported: Not Supported 01:10:52.843 01:10:52.843 Controller Memory Buffer Support 01:10:52.843 ================================ 01:10:52.843 Supported: No 01:10:52.843 01:10:52.843 Persistent Memory Region Support 01:10:52.843 ================================ 01:10:52.843 Supported: No 01:10:52.843 01:10:52.843 Admin Command Set Attributes 01:10:52.843 ============================ 01:10:52.843 Security Send/Receive: Not Supported 01:10:52.843 Format NVM: Not Supported 01:10:52.843 Firmware Activate/Download: Not Supported 01:10:52.843 Namespace Management: Not Supported 01:10:52.843 Device Self-Test: Not Supported 01:10:52.843 Directives: Not Supported 01:10:52.843 NVMe-MI: Not Supported 01:10:52.844 Virtualization Management: Not Supported 01:10:52.844 Doorbell Buffer Config: Not Supported 01:10:52.844 Get LBA Status Capability: Not Supported 01:10:52.844 Command & Feature Lockdown Capability: Not Supported 01:10:52.844 Abort Command Limit: 4 01:10:52.844 Async Event Request Limit: 4 01:10:52.844 Number of Firmware Slots: N/A 01:10:52.844 Firmware Slot 1 Read-Only: N/A 01:10:52.844 Firmware Activation Without Reset: N/A 01:10:52.844 Multiple Update Detection Support: N/A 01:10:52.844 Firmware Update Granularity: No Information Provided 01:10:52.844 Per-Namespace SMART Log: Yes 01:10:52.844 Asymmetric Namespace Access Log Page: Supported 01:10:52.844 ANA Transition Time : 10 sec 01:10:52.844 01:10:52.844 Asymmetric Namespace Access Capabilities 01:10:52.844 ANA Optimized State : Supported 01:10:52.844 ANA Non-Optimized State : Supported 01:10:52.844 ANA Inaccessible State : Supported 01:10:52.844 ANA Persistent Loss State : Supported 01:10:52.844 ANA Change State : Supported 01:10:52.844 ANAGRPID is not changed : No 01:10:52.844 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 01:10:52.844 01:10:52.844 ANA Group Identifier Maximum : 128 01:10:52.844 Number of ANA Group Identifiers : 128 01:10:52.844 Max Number of Allowed Namespaces : 1024 01:10:52.844 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 01:10:52.844 Command Effects Log Page: Supported 01:10:52.844 Get Log Page Extended Data: Supported 01:10:52.844 Telemetry Log Pages: Not Supported 01:10:52.844 Persistent Event Log Pages: Not Supported 01:10:52.844 Supported Log Pages Log Page: May Support 01:10:52.844 Commands Supported & Effects Log Page: Not Supported 01:10:52.844 Feature Identifiers & Effects Log Page:May Support 01:10:52.844 NVMe-MI Commands & Effects Log Page: May Support 01:10:52.844 Data Area 4 for Telemetry Log: Not Supported 01:10:52.844 Error Log Page Entries Supported: 128 01:10:52.844 Keep Alive: Supported 01:10:52.844 Keep Alive Granularity: 1000 ms 01:10:52.844 01:10:52.844 NVM Command Set Attributes 01:10:52.844 ========================== 01:10:52.844 Submission Queue Entry Size 01:10:52.844 Max: 64 01:10:52.844 Min: 64 01:10:52.844 Completion Queue Entry Size 01:10:52.844 Max: 16 01:10:52.844 Min: 16 01:10:52.844 Number of Namespaces: 1024 01:10:52.844 Compare Command: Not Supported 01:10:52.844 Write Uncorrectable Command: Not Supported 01:10:52.844 Dataset Management Command: Supported 01:10:52.844 Write Zeroes Command: Supported 01:10:52.844 Set Features Save Field: Not Supported 01:10:52.844 Reservations: Not Supported 01:10:52.844 Timestamp: Not Supported 01:10:52.844 Copy: Not Supported 01:10:52.844 Volatile Write Cache: Present 01:10:52.844 Atomic Write Unit (Normal): 1 01:10:52.844 Atomic Write Unit (PFail): 1 01:10:52.844 Atomic Compare & Write Unit: 1 01:10:52.844 Fused Compare & Write: Not Supported 01:10:52.844 Scatter-Gather List 01:10:52.844 SGL Command Set: Supported 01:10:52.844 SGL Keyed: Not Supported 01:10:52.844 SGL Bit Bucket Descriptor: Not Supported 01:10:52.844 SGL Metadata Pointer: Not Supported 01:10:52.844 Oversized SGL: Not Supported 01:10:52.844 SGL Metadata Address: Not Supported 01:10:52.844 SGL Offset: Supported 01:10:52.844 Transport SGL Data Block: Not Supported 01:10:52.844 Replay Protected Memory Block: Not Supported 01:10:52.844 01:10:52.844 Firmware Slot Information 01:10:52.844 ========================= 01:10:52.844 Active slot: 0 01:10:52.844 01:10:52.844 Asymmetric Namespace Access 01:10:52.844 =========================== 01:10:52.844 Change Count : 0 01:10:52.844 Number of ANA Group Descriptors : 1 01:10:52.844 ANA Group Descriptor : 0 01:10:52.844 ANA Group ID : 1 01:10:52.844 Number of NSID Values : 1 01:10:52.844 Change Count : 0 01:10:52.844 ANA State : 1 01:10:52.844 Namespace Identifier : 1 01:10:52.844 01:10:52.844 Commands Supported and Effects 01:10:52.844 ============================== 01:10:52.844 Admin Commands 01:10:52.844 -------------- 01:10:52.844 Get Log Page (02h): Supported 01:10:52.844 Identify (06h): Supported 01:10:52.844 Abort (08h): Supported 01:10:52.844 Set Features (09h): Supported 01:10:52.844 Get Features (0Ah): Supported 01:10:52.844 Asynchronous Event Request (0Ch): Supported 01:10:52.844 Keep Alive (18h): Supported 01:10:52.844 I/O Commands 01:10:52.844 ------------ 01:10:52.844 Flush (00h): Supported 01:10:52.844 Write (01h): Supported LBA-Change 01:10:52.844 Read (02h): Supported 01:10:52.844 Write Zeroes (08h): Supported LBA-Change 01:10:52.844 Dataset Management (09h): Supported 01:10:52.844 01:10:52.844 Error Log 01:10:52.844 ========= 01:10:52.844 Entry: 0 01:10:52.844 Error Count: 0x3 01:10:52.844 Submission Queue Id: 0x0 01:10:52.844 Command Id: 0x5 01:10:52.844 Phase Bit: 0 01:10:52.844 Status Code: 0x2 01:10:52.844 Status Code Type: 0x0 01:10:52.844 Do Not Retry: 1 01:10:52.844 Error 
Location: 0x28 01:10:52.844 LBA: 0x0 01:10:52.844 Namespace: 0x0 01:10:52.844 Vendor Log Page: 0x0 01:10:52.844 ----------- 01:10:52.844 Entry: 1 01:10:52.844 Error Count: 0x2 01:10:52.844 Submission Queue Id: 0x0 01:10:52.844 Command Id: 0x5 01:10:52.844 Phase Bit: 0 01:10:52.844 Status Code: 0x2 01:10:52.844 Status Code Type: 0x0 01:10:52.844 Do Not Retry: 1 01:10:52.844 Error Location: 0x28 01:10:52.844 LBA: 0x0 01:10:52.844 Namespace: 0x0 01:10:52.844 Vendor Log Page: 0x0 01:10:52.844 ----------- 01:10:52.844 Entry: 2 01:10:52.844 Error Count: 0x1 01:10:52.844 Submission Queue Id: 0x0 01:10:52.844 Command Id: 0x4 01:10:52.844 Phase Bit: 0 01:10:52.844 Status Code: 0x2 01:10:52.844 Status Code Type: 0x0 01:10:52.844 Do Not Retry: 1 01:10:52.844 Error Location: 0x28 01:10:52.844 LBA: 0x0 01:10:52.844 Namespace: 0x0 01:10:52.844 Vendor Log Page: 0x0 01:10:52.844 01:10:52.844 Number of Queues 01:10:52.844 ================ 01:10:52.844 Number of I/O Submission Queues: 128 01:10:52.844 Number of I/O Completion Queues: 128 01:10:52.844 01:10:52.844 ZNS Specific Controller Data 01:10:52.844 ============================ 01:10:52.844 Zone Append Size Limit: 0 01:10:52.844 01:10:52.844 01:10:52.844 Active Namespaces 01:10:52.844 ================= 01:10:52.844 get_feature(0x05) failed 01:10:52.844 Namespace ID:1 01:10:52.844 Command Set Identifier: NVM (00h) 01:10:52.844 Deallocate: Supported 01:10:52.844 Deallocated/Unwritten Error: Not Supported 01:10:52.844 Deallocated Read Value: Unknown 01:10:52.844 Deallocate in Write Zeroes: Not Supported 01:10:52.844 Deallocated Guard Field: 0xFFFF 01:10:52.844 Flush: Supported 01:10:52.844 Reservation: Not Supported 01:10:52.844 Namespace Sharing Capabilities: Multiple Controllers 01:10:52.844 Size (in LBAs): 1310720 (5GiB) 01:10:52.844 Capacity (in LBAs): 1310720 (5GiB) 01:10:52.844 Utilization (in LBAs): 1310720 (5GiB) 01:10:52.844 UUID: 09798fec-0f52-43ef-bb34-730dae881f23 01:10:52.844 Thin Provisioning: Not Supported 01:10:52.844 Per-NS Atomic Units: Yes 01:10:52.844 Atomic Boundary Size (Normal): 0 01:10:52.844 Atomic Boundary Size (PFail): 0 01:10:52.844 Atomic Boundary Offset: 0 01:10:52.844 NGUID/EUI64 Never Reused: No 01:10:52.844 ANA group ID: 1 01:10:52.844 Namespace Write Protected: No 01:10:52.844 Number of LBA Formats: 1 01:10:52.844 Current LBA Format: LBA Format #00 01:10:52.844 LBA Format #00: Data Size: 4096 Metadata Size: 0 01:10:52.844 01:10:52.844 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 01:10:52.844 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 01:10:52.844 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 01:10:53.103 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:10:53.103 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 01:10:53.103 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 01:10:53.103 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:10:53.103 rmmod nvme_tcp 01:10:53.103 rmmod nvme_fabrics 01:10:53.103 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:10:53.103 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 01:10:53.103 06:09:47 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 01:10:53.103 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 01:10:53.103 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:10:53.103 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:10:53.103 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:10:53.103 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 01:10:53.103 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:10:53.103 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 01:10:53.103 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 01:10:53.103 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:10:53.103 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:10:53.103 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:10:53.103 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:10:53.104 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:10:53.104 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:10:53.104 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:10:53.104 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:10:53.104 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:10:53.104 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:10:53.104 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:10:53.104 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:10:53.104 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:10:53.363 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:10:53.363 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:10:53.363 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 01:10:53.363 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:53.363 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:10:53.363 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:53.363 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 01:10:53.363 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 01:10:53.363 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 01:10:53.363 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 01:10:53.363 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 01:10:53.363 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:10:53.363 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 01:10:53.363 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:10:53.363 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 01:10:53.363 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 01:10:53.363 06:09:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:10:54.300 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:10:54.300 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:10:54.559 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:10:54.559 01:10:54.559 real 0m4.425s 01:10:54.559 user 0m1.404s 01:10:54.559 sys 0m2.279s 01:10:54.559 06:09:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 01:10:54.559 06:09:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 01:10:54.559 ************************************ 01:10:54.559 END TEST nvmf_identify_kernel_target 01:10:54.559 ************************************ 01:10:54.559 06:09:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 01:10:54.559 06:09:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:10:54.559 06:09:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:10:54.559 06:09:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:10:54.559 ************************************ 01:10:54.559 START TEST nvmf_auth_host 01:10:54.559 ************************************ 01:10:54.559 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 01:10:54.819 * Looking for test storage... 
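For reference, the nvmf_identify_kernel_target test above drives the kernel nvmet target entirely through configfs and then tears it down again. A condensed sketch of that setup and teardown follows; the NQN, address and port are taken from the trace, the backing device is whichever unused namespace block_in_use selected, and the attribute file names on the right of each redirection are the standard nvmet configfs names, which the xtrace output does not show and are therefore assumed:

  nqn=nqn.2016-06.io.spdk:testnqn
  dev=/dev/nvme1n1                                             # placeholder: picked at run time by the script
  cfg=/sys/kernel/config/nvmet
  mkdir $cfg/subsystems/$nqn
  mkdir $cfg/subsystems/$nqn/namespaces/1
  mkdir $cfg/ports/1
  echo "SPDK-$nqn" > $cfg/subsystems/$nqn/attr_model           # assumed target file
  echo 1           > $cfg/subsystems/$nqn/attr_allow_any_host  # assumed target file
  echo $dev        > $cfg/subsystems/$nqn/namespaces/1/device_path
  echo 1           > $cfg/subsystems/$nqn/namespaces/1/enable
  echo 10.0.0.1    > $cfg/ports/1/addr_traddr
  echo tcp         > $cfg/ports/1/addr_trtype
  echo 4420        > $cfg/ports/1/addr_trsvcid
  echo ipv4        > $cfg/ports/1/addr_adrfam
  ln -s $cfg/subsystems/$nqn $cfg/ports/1/subsystems/
  # teardown (clean_kernel_target), in the same order as the trace:
  echo 0 > $cfg/subsystems/$nqn/namespaces/1/enable
  rm -f $cfg/ports/1/subsystems/$nqn
  rmdir $cfg/subsystems/$nqn/namespaces/1 $cfg/ports/1 $cfg/subsystems/$nqn
  modprobe -r nvmet_tcp nvmet

Once the port symlink is in place, the subsystem shows up both in nvme discover and in spdk_nvme_identify against nqn.2014-08.org.nvmexpress.discovery, which is exactly what the two listings above verify.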
01:10:54.819 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 01:10:54.819 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:10:54.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:54.820 --rc genhtml_branch_coverage=1 01:10:54.820 --rc genhtml_function_coverage=1 01:10:54.820 --rc genhtml_legend=1 01:10:54.820 --rc geninfo_all_blocks=1 01:10:54.820 --rc geninfo_unexecuted_blocks=1 01:10:54.820 01:10:54.820 ' 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:10:54.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:54.820 --rc genhtml_branch_coverage=1 01:10:54.820 --rc genhtml_function_coverage=1 01:10:54.820 --rc genhtml_legend=1 01:10:54.820 --rc geninfo_all_blocks=1 01:10:54.820 --rc geninfo_unexecuted_blocks=1 01:10:54.820 01:10:54.820 ' 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:10:54.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:54.820 --rc genhtml_branch_coverage=1 01:10:54.820 --rc genhtml_function_coverage=1 01:10:54.820 --rc genhtml_legend=1 01:10:54.820 --rc geninfo_all_blocks=1 01:10:54.820 --rc geninfo_unexecuted_blocks=1 01:10:54.820 01:10:54.820 ' 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:10:54.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:54.820 --rc genhtml_branch_coverage=1 01:10:54.820 --rc genhtml_function_coverage=1 01:10:54.820 --rc genhtml_legend=1 01:10:54.820 --rc geninfo_all_blocks=1 01:10:54.820 --rc geninfo_unexecuted_blocks=1 01:10:54.820 01:10:54.820 ' 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:10:54.820 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:10:54.820 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:10:54.821 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:10:54.821 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:10:54.821 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:10:54.821 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:10:54.821 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:10:55.080 Cannot find device "nvmf_init_br" 01:10:55.080 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 01:10:55.080 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:10:55.080 Cannot find device "nvmf_init_br2" 01:10:55.080 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 01:10:55.080 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:10:55.080 Cannot find device "nvmf_tgt_br" 01:10:55.080 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 01:10:55.080 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:10:55.080 Cannot find device "nvmf_tgt_br2" 01:10:55.080 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 01:10:55.080 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:10:55.080 Cannot find device "nvmf_init_br" 01:10:55.080 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 01:10:55.080 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:10:55.080 Cannot find device "nvmf_init_br2" 01:10:55.080 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 01:10:55.080 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:10:55.080 Cannot find device "nvmf_tgt_br" 01:10:55.080 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 01:10:55.080 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:10:55.080 Cannot find device "nvmf_tgt_br2" 01:10:55.080 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 01:10:55.080 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:10:55.080 Cannot find device "nvmf_br" 01:10:55.080 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 01:10:55.080 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:10:55.080 Cannot find device "nvmf_init_if" 01:10:55.080 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 01:10:55.080 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:10:55.080 Cannot find device "nvmf_init_if2" 01:10:55.080 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 01:10:55.080 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:10:55.080 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:10:55.080 06:09:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 01:10:55.080 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:10:55.080 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:10:55.080 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 01:10:55.080 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:10:55.080 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:10:55.080 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:10:55.080 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:10:55.339 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:10:55.339 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:10:55.339 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:10:55.339 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:10:55.339 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:10:55.339 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:10:55.339 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:10:55.339 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:10:55.339 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:10:55.339 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:10:55.339 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:10:55.339 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:10:55.339 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:10:55.339 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:10:55.339 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:10:55.339 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:10:55.339 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:10:55.339 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:10:55.339 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:10:55.339 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:10:55.339 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
01:10:55.339 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:10:55.339 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:10:55.339 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:10:55.339 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:10:55.339 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:10:55.339 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:10:55.339 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:10:55.599 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:10:55.599 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:10:55.599 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 01:10:55.599 01:10:55.599 --- 10.0.0.3 ping statistics --- 01:10:55.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:55.599 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 01:10:55.599 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:10:55.599 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:10:55.599 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 01:10:55.599 01:10:55.599 --- 10.0.0.4 ping statistics --- 01:10:55.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:55.599 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 01:10:55.599 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:10:55.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:10:55.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 01:10:55.599 01:10:55.599 --- 10.0.0.1 ping statistics --- 01:10:55.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:55.599 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 01:10:55.599 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:10:55.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:10:55.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 01:10:55.599 01:10:55.599 --- 10.0.0.2 ping statistics --- 01:10:55.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:55.599 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 01:10:55.599 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:10:55.599 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 01:10:55.599 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:10:55.599 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:10:55.599 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:10:55.599 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:10:55.599 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:10:55.599 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:10:55.599 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:10:55.599 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 01:10:55.599 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:10:55.599 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 01:10:55.599 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:10:55.599 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=77854 01:10:55.599 06:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 01:10:55.599 06:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 77854 01:10:55.599 06:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 77854 ']' 01:10:55.599 06:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:10:55.599 06:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 01:10:55.599 06:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
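The nvmf_veth_init sequence traced above builds the virtual test network that nvmf_auth_host talks over. In outline, with interface names and addresses exactly as in the trace (only the first veth pair of each kind is repeated here, and the SPDK_NVMF comment tag, which the later iptr/iptables-restore cleanup uses to strip these rules, is abbreviated):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side, 10.0.0.1/24
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target side, moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
  ping -c 1 10.0.0.3                                               # initiator -> target address inside the namespace

The four pings above (10.0.0.1 through 10.0.0.4, in both directions across the bridge) are the sanity check that this topology came up before nvmf_tgt is started inside the namespace.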
01:10:55.599 06:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 01:10:55.599 06:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:10:56.553 06:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:10:56.553 06:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 01:10:56.553 06:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:10:56.553 06:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 01:10:56.553 06:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:10:56.553 06:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:10:56.553 06:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 01:10:56.553 06:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 01:10:56.553 06:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:10:56.553 06:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:10:56.553 06:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:10:56.553 06:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 01:10:56.553 06:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 01:10:56.553 06:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 01:10:56.553 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=33d7e296865075cd37195beb3c83035a 01:10:56.553 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 01:10:56.553 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.pIc 01:10:56.554 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 33d7e296865075cd37195beb3c83035a 0 01:10:56.554 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 33d7e296865075cd37195beb3c83035a 0 01:10:56.554 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:10:56.554 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:10:56.554 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=33d7e296865075cd37195beb3c83035a 01:10:56.554 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 01:10:56.554 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:10:56.554 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.pIc 01:10:56.554 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.pIc 01:10:56.554 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.pIc 01:10:56.554 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 01:10:56.554 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:10:56.554 06:09:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:10:56.554 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:10:56.554 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 01:10:56.554 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 01:10:56.554 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 01:10:56.554 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e242dcdcf70aa334a9cf41fb99b18d0d00439c1ac9ff46384a9e0a8090c66c84 01:10:56.554 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 01:10:56.554 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.pP1 01:10:56.554 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e242dcdcf70aa334a9cf41fb99b18d0d00439c1ac9ff46384a9e0a8090c66c84 3 01:10:56.554 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e242dcdcf70aa334a9cf41fb99b18d0d00439c1ac9ff46384a9e0a8090c66c84 3 01:10:56.554 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:10:56.554 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:10:56.554 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e242dcdcf70aa334a9cf41fb99b18d0d00439c1ac9ff46384a9e0a8090c66c84 01:10:56.554 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 01:10:56.554 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:10:56.554 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.pP1 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.pP1 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.pP1 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=104d668c0da7b4a3fcc6c4af915dd796eb71f3beeeac13ff 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.YQk 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 104d668c0da7b4a3fcc6c4af915dd796eb71f3beeeac13ff 0 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 104d668c0da7b4a3fcc6c4af915dd796eb71f3beeeac13ff 0 
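gen_dhchap_key, traced repeatedly above, is what produces the keys[] and ckeys[] files the auth test feeds to the target and host later on. Its visible steps, in outline; the final DHHC-1 formatting is performed by an inline "python -" heredoc that the trace does not expand, so it is only indicated by a comment here, and the example values are the ones from this run:

  # gen_dhchap_key <digest> <len>, where digest ids are null=0, sha256=1, sha384=2, sha512=3
  len=32                                             # hex characters requested
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # e.g. 33d7e296865075cd37195beb3c83035a
  file=$(mktemp -t spdk.key-null.XXX)                # e.g. /tmp/spdk.key-null.pIc
  # format_dhchap_key then wraps $key into the DHHC-1:<digest-id>:...: on-wire representation
  # via the inline python snippet (not shown in the trace) and writes the result into $file
  chmod 0600 "$file"
  keys[0]=$file                                      # ckeys[] are built the same way with other digests/lengths

The trace then repeats this for each digest/length combination (null/32, sha512/64, null/48, sha384/48, sha256/32, ...), which is why the same xxd/mktemp/chmod pattern recurs below.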
01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=104d668c0da7b4a3fcc6c4af915dd796eb71f3beeeac13ff 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.YQk 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.YQk 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.YQk 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 01:10:56.812 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=647f2111819c5b3239efdeae7fc552fbc0fedc78d8139cdd 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.UUA 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 647f2111819c5b3239efdeae7fc552fbc0fedc78d8139cdd 2 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 647f2111819c5b3239efdeae7fc552fbc0fedc78d8139cdd 2 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=647f2111819c5b3239efdeae7fc552fbc0fedc78d8139cdd 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.UUA 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.UUA 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.UUA 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:10:56.813 06:09:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=802f7f7ec88593d34ce3af3e3421bbcb 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.fpK 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 802f7f7ec88593d34ce3af3e3421bbcb 1 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 802f7f7ec88593d34ce3af3e3421bbcb 1 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=802f7f7ec88593d34ce3af3e3421bbcb 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.fpK 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.fpK 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.fpK 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=05579c28fa978a0225f04acc38e1d433 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Itr 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 05579c28fa978a0225f04acc38e1d433 1 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 05579c28fa978a0225f04acc38e1d433 1 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=05579c28fa978a0225f04acc38e1d433 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 01:10:56.813 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:10:57.072 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Itr 01:10:57.072 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Itr 01:10:57.072 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Itr 01:10:57.072 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 01:10:57.072 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:10:57.072 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:10:57.072 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:10:57.072 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 01:10:57.072 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 01:10:57.072 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e18502f416a3f683d66aef8ca48c1952fc52ac71365b74e4 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.rW0 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e18502f416a3f683d66aef8ca48c1952fc52ac71365b74e4 2 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e18502f416a3f683d66aef8ca48c1952fc52ac71365b74e4 2 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e18502f416a3f683d66aef8ca48c1952fc52ac71365b74e4 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.rW0 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.rW0 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.rW0 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 01:10:57.073 06:09:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7445cc40af32236f9917979a8ca56b7b 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.brE 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7445cc40af32236f9917979a8ca56b7b 0 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7445cc40af32236f9917979a8ca56b7b 0 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7445cc40af32236f9917979a8ca56b7b 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.brE 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.brE 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.brE 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7cc265cd082a5d05186bfe1fe3551ebdf1efe6cd4c4b1f10089730c62eaf833d 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.v3P 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7cc265cd082a5d05186bfe1fe3551ebdf1efe6cd4c4b1f10089730c62eaf833d 3 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7cc265cd082a5d05186bfe1fe3551ebdf1efe6cd4c4b1f10089730c62eaf833d 3 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7cc265cd082a5d05186bfe1fe3551ebdf1efe6cd4c4b1f10089730c62eaf833d 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 01:10:57.073 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.v3P 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.v3P 01:10:57.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.v3P 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 77854 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 77854 ']' 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pIc 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.pP1 ]] 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pP1 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.YQk 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.UUA ]] 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.UUA 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.fpK 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:57.332 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Itr ]] 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Itr 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.rW0 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.brE ]] 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.brE 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.v3P 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:10:57.592 06:09:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:10:57.592 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:10:57.593 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 01:10:57.593 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 01:10:57.593 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 01:10:57.593 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:10:57.593 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 01:10:57.593 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 01:10:57.593 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 01:10:57.593 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 01:10:57.593 06:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 01:10:57.593 06:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 01:10:57.593 06:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:10:58.162 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:10:58.162 Waiting for block devices as requested 01:10:58.162 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:10:58.162 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:10:59.107 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:10:59.107 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 01:10:59.108 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 01:10:59.108 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 01:10:59.108 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:10:59.108 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:10:59.108 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 01:10:59.108 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 01:10:59.108 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 01:10:59.108 No valid GPT data, bailing 01:10:59.108 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:10:59.108 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 01:10:59.108 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 01:10:59.108 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 01:10:59.108 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:10:59.108 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 01:10:59.108 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 01:10:59.108 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 01:10:59.108 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 01:10:59.108 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:10:59.108 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 01:10:59.108 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 01:10:59.108 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 01:10:59.108 No valid GPT data, bailing 01:10:59.108 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 01:10:59.367 No valid GPT data, bailing 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 01:10:59.367 No valid GPT data, bailing 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid=bac40580-41f0-4da4-8cd9-1be4901a67b8 -a 10.0.0.1 -t tcp -s 4420 01:10:59.367 01:10:59.367 Discovery Log Number of Records 2, Generation counter 2 01:10:59.367 =====Discovery Log Entry 0====== 01:10:59.367 trtype: tcp 01:10:59.367 adrfam: ipv4 01:10:59.367 subtype: current discovery subsystem 01:10:59.367 treq: not specified, sq flow control disable supported 01:10:59.367 portid: 1 01:10:59.367 trsvcid: 4420 01:10:59.367 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:10:59.367 traddr: 10.0.0.1 01:10:59.367 eflags: none 01:10:59.367 sectype: none 01:10:59.367 =====Discovery Log Entry 1====== 01:10:59.367 trtype: tcp 01:10:59.367 adrfam: ipv4 01:10:59.367 subtype: nvme subsystem 01:10:59.367 treq: not specified, sq flow control disable supported 01:10:59.367 portid: 1 01:10:59.367 trsvcid: 4420 01:10:59.367 subnqn: nqn.2024-02.io.spdk:cnode0 01:10:59.367 traddr: 10.0.0.1 01:10:59.367 eflags: none 01:10:59.367 sectype: none 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 01:10:59.367 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 01:10:59.625 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:10:59.625 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:10:59.625 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:10:59.625 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:10:59.625 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:10:59.625 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:10:59.625 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:10:59.625 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:10:59.625 06:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:10:59.625 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:10:59.625 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: ]] 01:10:59.625 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:10:59.625 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 01:10:59.625 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 01:10:59.625 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 01:10:59.625 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:10:59.625 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 01:10:59.625 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:10:59.626 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 01:10:59.626 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:10:59.626 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:10:59.626 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:10:59.626 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:10:59.626 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:59.626 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:10:59.626 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:59.626 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:10:59.626 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:10:59.626 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:10:59.626 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:10:59.626 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:10:59.626 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:10:59.626 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:10:59.626 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:10:59.626 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:10:59.626 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 01:10:59.626 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:10:59.626 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:10:59.626 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:59.626 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:10:59.626 nvme0n1 01:10:59.626 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:59.626 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:10:59.626 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:59.626 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:10:59.626 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:10:59.626 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: ]] 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:10:59.885 nvme0n1 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:59.885 
06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: ]] 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:10:59.885 06:09:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:59.885 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:00.145 nvme0n1 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:00.145 06:09:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: ]] 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:00.145 nvme0n1 01:11:00.145 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: ]] 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:00.405 06:09:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:00.405 nvme0n1 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:11:00.405 
06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:00.405 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:00.664 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:00.664 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:00.664 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:00.664 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:00.664 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:00.664 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:00.664 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:00.664 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:11:00.664 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:00.664 06:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
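Every iteration traced above and below exercises the same DH-HMAC-CHAP connect sequence, only with a different digest/dhgroup/keyid combination. A minimal sketch of that per-iteration sequence, reconstructed solely from the xtrace output in this log (the configfs writes done inside nvmet_auth_set_key are not visible in this excerpt, key material is elided, and rpc_cmd / nvmet_auth_set_key are the test-harness helpers seen in the trace, not re-implemented here):

# Sketch reconstructed from the trace; not the verbatim host/auth.sh source.
digest=sha256 dhgroup=ffdhe2048 keyid=2            # values iterated by the test loops
nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # target side: 'hmac(sha256)', dhgroup, DHHC-1 key (and ctrlr key if set)

# Host side: restrict the allowed digest/dhgroup, then connect with the matching key(s).
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}

# Verify the authenticated controller came up, then detach for the next iteration.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0
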
01:11:00.664 nvme0n1 01:11:00.664 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:00.664 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:00.664 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:00.664 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:00.664 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:00.664 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:00.664 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:00.664 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:00.664 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:00.665 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:00.665 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:00.665 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:11:00.665 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:00.665 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 01:11:00.665 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:00.665 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:11:00.665 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:11:00.665 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:11:00.665 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:11:00.665 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:11:00.665 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:11:00.665 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: ]] 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:11:00.924 06:09:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:00.924 nvme0n1 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:00.924 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:01.186 06:09:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: ]] 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:01.186 06:09:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:01.186 nvme0n1 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: ]] 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:11:01.186 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:11:01.187 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:01.187 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:11:01.187 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:01.187 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:01.187 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:01.187 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:01.187 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:01.187 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:01.187 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:01.187 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:01.187 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:01.187 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:01.187 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:01.187 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:01.187 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:01.187 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:01.187 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:11:01.187 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:01.187 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:01.482 nvme0n1 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: ]] 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:01.482 06:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:01.769 nvme0n1 01:11:01.769 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:01.769 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:01.769 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:01.769 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:01.769 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:01.769 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:01.769 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:01.769 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:01.769 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:01.769 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:01.769 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:01.769 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:01.769 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 01:11:01.769 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:01.769 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:11:01.769 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:11:01.769 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:11:01.769 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:01.769 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:11:01.769 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:11:01.769 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:11:01.769 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:01.769 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:11:01.769 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:01.770 nvme0n1 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:11:01.770 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:11:02.341 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:11:02.341 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: ]] 01:11:02.341 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:11:02.341 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 01:11:02.341 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:02.341 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:11:02.341 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:11:02.341 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:11:02.341 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:02.341 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:11:02.341 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:02.341 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:02.341 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:02.341 06:09:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:02.341 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:02.341 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:02.341 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:02.341 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:02.341 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:02.341 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:02.341 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:02.341 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:02.341 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:02.341 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:02.341 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:11:02.341 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:02.341 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:02.602 nvme0n1 01:11:02.602 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:02.602 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:02.602 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:02.602 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:02.602 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:02.602 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:02.602 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:02.602 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:02.602 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:02.602 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:02.602 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:02.602 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:02.602 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 01:11:02.602 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:02.602 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:11:02.602 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:11:02.602 06:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: ]] 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:11:02.602 06:09:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:02.602 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:02.863 nvme0n1 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: ]] 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:02.863 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:03.124 nvme0n1 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: ]] 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:03.124 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:03.384 nvme0n1 01:11:03.384 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:03.384 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:03.384 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:03.384 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:03.384 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:03.384 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:03.384 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:03.384 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:03.384 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:03.384 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:03.384 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:03.384 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:03.384 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 01:11:03.384 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:03.384 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:11:03.384 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:11:03.384 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:11:03.384 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:03.384 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:11:03.384 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:11:03.384 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:11:03.384 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:03.384 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:11:03.384 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 01:11:03.384 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:03.384 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:11:03.384 06:09:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:11:03.384 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:11:03.384 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:03.385 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:11:03.385 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:03.385 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:03.385 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:03.385 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:03.385 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:03.385 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:03.385 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:03.385 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:03.385 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:03.385 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:03.385 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:03.385 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:03.385 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:03.385 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:03.385 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:11:03.385 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:03.385 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:03.385 nvme0n1 01:11:03.385 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:03.385 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:03.385 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:03.385 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:03.645 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:03.645 06:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:03.645 06:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:03.645 06:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:03.645 06:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:03.645 06:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:03.645 06:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:03.645 06:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:11:03.645 06:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:03.645 06:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 01:11:03.645 06:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:03.645 06:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:11:03.645 06:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:11:03.645 06:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:11:03.645 06:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:11:03.645 06:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:11:03.645 06:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:11:03.645 06:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:11:05.025 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:11:05.025 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: ]] 01:11:05.025 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:11:05.025 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 01:11:05.025 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:05.025 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:11:05.025 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:11:05.025 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:11:05.025 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:05.025 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:11:05.025 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:05.025 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:05.025 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:05.025 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:05.025 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:05.025 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:05.025 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:05.025 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:05.025 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:05.025 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:05.025 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:05.025 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:05.025 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:05.025 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:05.025 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:11:05.025 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:05.025 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:05.025 nvme0n1 01:11:05.025 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:05.026 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:05.026 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:05.026 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:05.026 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:05.284 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:05.284 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:05.284 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:05.284 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:05.284 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:05.284 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:05.284 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:05.284 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 01:11:05.284 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:05.284 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:11:05.284 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:11:05.284 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:11:05.284 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:05.284 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:05.284 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:11:05.284 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:11:05.285 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:05.285 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: ]] 01:11:05.285 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:05.285 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 01:11:05.285 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:05.285 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:11:05.285 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:11:05.285 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:11:05.285 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:05.285 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:11:05.285 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:05.285 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:05.285 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:05.285 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:05.285 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:05.285 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:05.285 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:05.285 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:05.285 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:05.285 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:05.285 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:05.285 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:05.285 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:05.285 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:05.285 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:11:05.285 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:05.285 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:05.544 nvme0n1 01:11:05.544 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:05.544 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:05.544 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:05.544 06:09:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:05.544 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:05.544 06:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:05.544 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:05.544 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:05.544 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:05.544 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:05.544 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:05.544 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:05.544 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 01:11:05.544 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:05.544 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:11:05.544 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:11:05.544 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:11:05.544 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:05.544 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:05.544 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:11:05.544 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:11:05.544 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:05.544 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: ]] 01:11:05.544 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:05.544 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 01:11:05.544 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:05.544 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:11:05.544 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:11:05.544 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:11:05.544 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:05.544 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:11:05.544 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:05.544 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:05.544 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:05.544 06:10:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:05.545 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:05.545 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:05.545 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:05.545 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:05.545 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:05.545 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:05.545 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:05.545 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:05.545 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:05.545 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:05.545 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:11:05.545 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:05.545 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:05.804 nvme0n1 01:11:05.804 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:05.804 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:05.804 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:05.804 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:05.804 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:05.804 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:05.804 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:05.804 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:05.804 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:05.804 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: ]] 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:11:06.064 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:06.064 
06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:06.325 nvme0n1 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:06.325 06:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:06.585 nvme0n1 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:06.585 06:10:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: ]] 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:06.585 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:07.154 nvme0n1 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: ]] 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:07.154 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:07.155 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:07.155 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:07.155 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:07.155 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:07.155 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:07.155 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:07.155 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:07.155 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:07.155 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:07.155 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:07.155 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:11:07.155 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:07.155 06:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:07.722 nvme0n1 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: ]] 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:07.722 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:07.722 
06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:07.723 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:07.723 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:07.723 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:07.723 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:07.723 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:11:07.723 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:07.723 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:08.291 nvme0n1 01:11:08.291 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:08.291 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:08.291 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:08.291 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:08.291 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:08.291 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:08.291 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:08.291 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:08.291 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:08.291 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:08.291 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:08.291 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:08.291 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 01:11:08.291 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:08.291 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: ]] 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:08.292 06:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:08.860 nvme0n1 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:08.860 06:10:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:08.860 06:10:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:08.860 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:09.430 nvme0n1 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: ]] 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:09.430 06:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 01:11:09.691 nvme0n1 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: ]] 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:09.691 nvme0n1 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 01:11:09.691 
06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: ]] 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:09.691 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:09.951 nvme0n1 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: ]] 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:09.951 
06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:09.951 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:10.211 nvme0n1 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:10.211 nvme0n1 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:10.211 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: ]] 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:10.471 nvme0n1 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:10.471 
06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:10.471 06:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:10.471 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:10.471 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:10.471 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:10.471 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 01:11:10.471 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:10.471 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:11:10.471 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:11:10.471 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:11:10.471 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:10.471 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:10.471 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:11:10.471 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:11:10.471 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:10.471 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: ]] 01:11:10.471 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:10.471 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 01:11:10.472 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:10.472 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:11:10.472 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:11:10.472 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:11:10.472 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:10.472 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:11:10.472 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:10.472 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:10.472 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:10.472 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:10.472 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:10.472 06:10:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:10.472 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:10.472 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:10.472 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:10.472 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:10.472 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:10.472 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:10.472 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:10.472 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:10.472 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:11:10.472 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:10.472 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:10.730 nvme0n1 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:10.730 06:10:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: ]] 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:10.730 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:10.989 nvme0n1 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: ]] 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:10.989 06:10:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:10.989 nvme0n1 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:10.989 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.248 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:11.248 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:11.248 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.248 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:11.248 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.248 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:11.248 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 01:11:11.248 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:11.248 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:11:11.248 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:11:11.248 
06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:11:11.248 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:11.248 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:11:11.248 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:11:11.248 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
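Editor's note (not part of the captured log): the trace above keeps repeating one pattern per digest/dhgroup/keyid combination, so a condensed sketch of that per-iteration flow is given here for readability. The RPC names, flags, address, port, NQNs and the jq check are taken verbatim from the trace; the loop skeleton, the ./scripts/rpc.py invocation path, and the assumption that key0..key4 and ckey0..ckey3 were already loaded into the SPDK keyring earlier in the run are reconstructions, not a claim about the exact contents of host/auth.sh. The target-side nvmet_auth_set_key step visible in the trace (the echo 'hmac(sha384)' / echo ffdhe2048 / echo DHHC-1:... lines), which installs the matching digest, dhgroup and secrets on the kernel nvmet side, is omitted from the sketch.

#!/usr/bin/env bash
# Hedged sketch of the per-iteration DH-CHAP connect test seen in the trace.
# Assumes an nvmet target listening on 10.0.0.1:4420 for nqn.2024-02.io.spdk:cnode0
# and an SPDK application with key0..key4 / ckey0..ckey3 already registered.

digests=(sha256 sha384)                              # digests exercised in this part of the log
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192)   # dhgroups exercised in this part of the log

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in 0 1 2 3 4; do
      # Restrict the host to a single digest/dhgroup pair for this pass.
      ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

      # Connect with the key under test; in this trace only key 4 has no controller key.
      ckey_arg=()
      [[ $keyid -lt 4 ]] && ckey_arg=(--dhchap-ctrlr-key "ckey${keyid}")
      ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey_arg[@]}"

      # Verify the controller came up, then detach it before the next combination.
      [[ "$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
      ./scripts/rpc.py bdev_nvme_detach_controller nvme0
    done
  done
done

End of editor's note; the captured log continues below.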
01:11:11.249 nvme0n1 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: ]] 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:11:11.249 06:10:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.249 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:11.509 nvme0n1 01:11:11.509 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.509 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:11.509 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:11.509 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.509 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:11.509 06:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:11.509 06:10:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: ]] 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:11.509 06:10:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.509 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:11.769 nvme0n1 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: ]] 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.769 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:12.029 nvme0n1 01:11:12.029 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:12.029 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:12.029 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:12.029 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:12.029 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:12.029 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:12.029 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:12.029 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 01:11:12.029 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:12.029 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:12.029 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:12.029 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: ]] 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:12.030 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:12.290 nvme0n1 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:12.290 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:12.550 nvme0n1 01:11:12.550 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:12.550 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:12.550 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:12.550 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:12.550 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:12.550 06:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: ]] 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:12.550 06:10:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:12.550 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:12.809 nvme0n1 01:11:12.809 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:12.809 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:12.809 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:12.809 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:12.809 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:12.809 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: ]] 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:11:13.069 06:10:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:13.069 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:13.330 nvme0n1 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: ]] 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:13.330 06:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:13.590 nvme0n1 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: ]] 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:11:13.590 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:13.591 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:13.850 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:13.850 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:13.850 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:13.850 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:13.850 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:13.850 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:13.850 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:13.850 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:13.850 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:13.850 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:13.850 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:13.850 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:13.850 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:11:13.850 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:13.850 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:14.110 nvme0n1 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:11:14.110 06:10:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:14.110 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:14.111 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:14.111 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:14.111 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:14.111 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:14.111 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:14.111 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:14.111 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:11:14.111 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:14.111 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:14.371 nvme0n1 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: ]] 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:14.371 06:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:14.941 nvme0n1 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: ]] 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:14.941 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:15.510 nvme0n1 01:11:15.510 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:15.510 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:15.510 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:15.510 06:10:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:15.510 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:15.510 06:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: ]] 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:15.510 06:10:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:15.510 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:16.079 nvme0n1 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: ]] 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:11:16.079 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:16.080 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:11:16.080 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:16.080 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:16.080 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:16.080 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:16.080 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:16.080 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:16.080 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:16.080 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:16.080 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:16.080 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:16.080 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:16.080 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:16.080 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:16.080 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:16.080 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:11:16.080 06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:16.080 
06:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:16.649 nvme0n1 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:16.649 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:17.218 nvme0n1 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 01:11:17.218 06:10:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: ]] 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:17.218 06:10:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:17.218 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:17.478 nvme0n1 01:11:17.478 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:17.478 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:17.478 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:17.478 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:17.478 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:17.478 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:17.478 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:17.478 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:17.478 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:17.478 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:17.478 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:17.478 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:17.478 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 01:11:17.478 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:17.478 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:11:17.478 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:11:17.478 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:11:17.478 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:17.478 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:17.478 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:11:17.478 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:11:17.478 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:17.478 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: ]] 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:17.479 06:10:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:17.479 nvme0n1 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:17.479 06:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: ]] 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:17.479 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:17.738 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:17.739 nvme0n1 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: ]] 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:17.739 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:17.999 nvme0n1 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:17.999 nvme0n1 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: ]] 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:17.999 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 01:11:18.259 nvme0n1 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:11:18.259 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: ]] 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:18.260 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:18.519 nvme0n1 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 01:11:18.519 
06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: ]] 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:18.519 06:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:18.519 nvme0n1 01:11:18.519 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: ]] 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:18.778 
06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:18.778 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:18.779 nvme0n1 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:18.779 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:19.038 nvme0n1 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: ]] 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:11:19.038 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 01:11:19.039 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:19.039 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:11:19.039 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:11:19.039 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:11:19.039 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:19.039 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:11:19.039 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:19.039 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:19.039 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:19.039 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:19.039 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:19.039 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:19.039 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:19.039 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:19.039 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:19.039 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:19.039 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:19.039 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:19.039 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:19.039 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:19.039 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:11:19.039 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:19.039 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:19.298 nvme0n1 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:19.298 
06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: ]] 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:19.298 06:10:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:19.298 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:19.557 nvme0n1 01:11:19.557 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:19.557 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:19.557 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:19.557 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:19.557 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:19.557 06:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:19.557 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:19.557 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:19.557 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:19.557 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:19.557 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:19.557 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:19.557 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 01:11:19.557 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:19.557 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:11:19.557 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:11:19.557 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:11:19.557 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:19.557 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:19.557 06:10:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:11:19.557 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:11:19.557 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:19.558 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: ]] 01:11:19.558 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:19.558 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 01:11:19.558 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:19.558 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:11:19.558 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:11:19.558 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:11:19.558 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:19.558 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:11:19.558 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:19.558 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:19.558 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:19.558 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:19.558 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:19.558 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:19.558 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:19.558 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:19.558 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:19.558 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:19.558 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:19.558 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:19.558 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:19.558 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:19.558 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:11:19.558 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:19.558 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:19.816 nvme0n1 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: ]] 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:19.816 06:10:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:19.816 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:20.074 nvme0n1 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:11:20.074 
06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:20.074 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:20.075 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:20.075 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:20.075 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:20.075 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:20.075 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:20.075 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:20.075 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:20.075 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:20.075 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:11:20.075 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:20.075 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
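The entries above repeat one pattern per dhgroup/keyid combination: nvmet_auth_set_key publishes the DHHC-1 secret on the target side, and connect_authenticate reconfigures the SPDK initiator and attaches with the matching key. A minimal sketch of one iteration follows (sha512 / ffdhe4096 / keyid=1). The four rpc_cmd calls mirror the trace verbatim; the configfs path and attribute names are assumptions, since the echo destinations are not visible in this excerpt, and "key1"/"ckey1" are keyring names registered earlier in the test, not shown here.

# Minimal sketch of one iteration of the loop traced above; rpc_cmd is the test
# suite's SPDK RPC wrapper seen throughout the trace.
digest=sha512 dhgroup=ffdhe4096 keyid=1
hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0
key='DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==:'   # key1 value from the trace
ckey='DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==:'  # ckey1 value from the trace

# Target side (nvmet_auth_set_key): publish digest, dhgroup and secrets for the host.
host_cfs=/sys/kernel/config/nvmet/hosts/${hostnqn}        # assumed configfs location
echo "hmac(${digest})" > "${host_cfs}/dhchap_hash"        # assumed attribute name
echo "${dhgroup}"      > "${host_cfs}/dhchap_dhgroup"     # assumed attribute name
echo "${key}"          > "${host_cfs}/dhchap_key"         # assumed attribute name
[[ -n ${ckey} ]] && echo "${ckey}" > "${host_cfs}/dhchap_ctrl_key"   # only when a ctrl key exists (keyid 4 has none)

# Initiator side (connect_authenticate): restrict SPDK to the digest/dhgroup under
# test, attach with the matching key pair, verify the controller exists, then detach.
rpc_cmd bdev_nvme_set_options --dhchap-digests "${digest}" --dhchap-dhgroups "${dhgroup}"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q "${hostnqn}" -n "${subnqn}" \
        --dhchap-key "key${keyid}" ${ckey:+--dhchap-ctrlr-key "ckey${keyid}"}
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0

The trace then repeats this sequence for key IDs 0 through 4 under each of the ffdhe3072, ffdhe4096 and ffdhe6144 groups, which is why the same rpc_cmd lines recur below with only the keyid and dhgroup changing.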
01:11:20.333 nvme0n1 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: ]] 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:11:20.333 06:10:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:20.333 06:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:20.601 nvme0n1 01:11:20.601 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:20.601 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:20.601 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:20.601 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:20.601 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:20.601 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:20.601 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:20.601 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:20.601 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:20.601 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:20.601 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:20.601 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:20.601 06:10:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 01:11:20.601 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:20.601 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:11:20.601 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:11:20.601 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:11:20.601 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:20.601 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:20.601 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:11:20.602 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:11:20.602 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:20.602 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: ]] 01:11:20.602 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:20.602 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 01:11:20.602 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:20.602 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:11:20.602 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:11:20.602 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:11:20.602 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:20.602 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:11:20.602 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:20.602 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:20.602 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:20.602 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:20.602 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:20.602 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:20.602 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:20.602 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:20.602 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:20.602 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:20.602 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:20.602 06:10:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:20.602 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:20.602 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:20.602 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:11:20.602 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:20.602 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:21.169 nvme0n1 01:11:21.169 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:21.169 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:21.169 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:21.169 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:21.169 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:21.169 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: ]] 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:21.170 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:21.429 nvme0n1 01:11:21.429 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:21.429 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:21.429 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:21.429 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:21.429 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:21.429 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:21.429 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:21.429 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 01:11:21.429 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:21.429 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:21.429 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: ]] 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:21.430 06:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:21.689 nvme0n1 01:11:21.689 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:21.689 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:21.689 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:21.689 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:21.689 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:21.689 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:21.689 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:21.689 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:21.689 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:21.689 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:21.689 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:21.689 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:21.689 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 01:11:21.689 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:21.689 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:11:21.689 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:11:21.689 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:11:21.689 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:21.689 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:11:21.689 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:11:21.689 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:11:21.689 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:21.689 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:11:21.689 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 01:11:21.689 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:21.690 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:11:21.690 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:11:21.690 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:11:21.690 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:21.690 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:11:21.690 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:21.690 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:21.690 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:21.690 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:21.690 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:21.690 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:21.690 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:21.690 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:21.690 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:21.690 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:21.690 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:21.690 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:21.690 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:21.690 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:21.690 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:11:21.690 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:21.690 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:22.266 nvme0n1 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNkN2UyOTY4NjUwNzVjZDM3MTk1YmViM2M4MzAzNWE4WijP: 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: ]] 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTI0MmRjZGNmNzBhYTMzNGE5Y2Y0MWZiOTliMThkMGQwMDQzOWMxYWM5ZmY0NjM4NGE5ZTBhODA5MGM2NmM4NPyDh9g=: 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:22.266 06:10:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:22.266 06:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:22.526 nvme0n1 01:11:22.526 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:22.526 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:22.526 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:22.526 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:22.526 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: ]] 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:11:22.785 06:10:17 
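(Aside: every connect_authenticate iteration in this trace runs the same short host-side RPC sequence, varying only the digest, DH group and key index. A minimal standalone sketch of that sequence, assuming rpc_cmd forwards to SPDK's scripts/rpc.py on the default socket and that the key1/ckey1 key names were registered with the keyring earlier in the test (that setup is outside this excerpt), reusing the address and NQNs from this run:

    # allow only one digest/DH group on the initiator, then connect with DH-HMAC-CHAP keys
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # confirm the controller authenticated and came up, then drop it before the next key
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0

The nvmet_auth_set_key calls just before each connect presumably push the matching key material and hmac(sha512)/dhgroup selection into the target-side host entry, so both ends agree before the attach is attempted.)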
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:22.785 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:23.353 nvme0n1 01:11:23.353 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:23.353 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:23.353 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:23.353 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:23.353 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:23.353 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:23.353 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:23.353 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:23.353 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:23.353 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:23.353 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:23.353 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:23.353 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 01:11:23.353 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:23.353 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:11:23.353 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:11:23.353 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: ]] 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:23.354 06:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:23.923 nvme0n1 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTE4NTAyZjQxNmEzZjY4M2Q2NmFlZjhjYTQ4YzE5NTJmYzUyYWM3MTM2NWI3NGU0dtSxJQ==: 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: ]] 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ0NWNjNDBhZjMyMjM2Zjk5MTc5NzlhOGNhNTZiN2JZiOa5: 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:23.923 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:24.183 nvme0n1 01:11:24.184 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:24.184 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:24.184 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:24.184 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:24.184 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2NjMjY1Y2QwODJhNWQwNTE4NmJmZTFmZTM1NTFlYmRmMWVmZTZjZDRjNGIxZjEwMDg5NzMwYzYyZWFmODMzZIWdZMc=: 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:11:24.444 06:10:18 
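(Aside: keyid 4 is the one entry without a paired controller key, which is why its attach calls, here and in the ffdhe6144 pass above, carry --dhchap-key key4 but no --dhchap-ctrlr-key. The ckey=(${ckeys[keyid]:+...}) line in the trace is the bash idiom that makes those extra arguments disappear; a small self-contained illustration of the same expansion, with hypothetical key material:

    #!/usr/bin/env bash
    ckeys=([1]="DHHC-1:01:example-ctrlr-key:" [4]="")   # hypothetical: keyid 4 has no ctrlr key
    for keyid in 1 4; do
        # empty/unset ckeys[keyid] -> zero words; otherwise two words are produced
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid adds ${#ckey[@]} arguments"   # 2 for keyid=1, 0 for keyid=4
    done

That is how the attach at auth.sh@61 ends up with or without the controller-key arguments, degrading gracefully to a one-way-authenticated connect when no controller key is configured.)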
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:24.444 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:24.445 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:24.445 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:24.445 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:24.445 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:24.445 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:11:24.445 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:24.445 06:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:25.016 nvme0n1 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: ]] 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:25.016 request: 01:11:25.016 { 01:11:25.016 "name": "nvme0", 01:11:25.016 "trtype": "tcp", 01:11:25.016 "traddr": "10.0.0.1", 01:11:25.016 "adrfam": "ipv4", 01:11:25.016 "trsvcid": "4420", 01:11:25.016 "subnqn": "nqn.2024-02.io.spdk:cnode0", 01:11:25.016 "hostnqn": "nqn.2024-02.io.spdk:host0", 01:11:25.016 "prchk_reftag": false, 01:11:25.016 "prchk_guard": false, 01:11:25.016 "hdgst": false, 01:11:25.016 "ddgst": false, 01:11:25.016 "allow_unrecognized_csi": false, 01:11:25.016 "method": "bdev_nvme_attach_controller", 01:11:25.016 "req_id": 1 01:11:25.016 } 01:11:25.016 Got JSON-RPC error response 01:11:25.016 response: 01:11:25.016 { 01:11:25.016 "code": -5, 01:11:25.016 "message": "Input/output error" 01:11:25.016 } 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:25.016 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:25.017 request: 01:11:25.017 { 01:11:25.017 "name": "nvme0", 01:11:25.017 "trtype": "tcp", 01:11:25.017 "traddr": "10.0.0.1", 01:11:25.017 "adrfam": "ipv4", 01:11:25.017 "trsvcid": "4420", 01:11:25.017 "subnqn": "nqn.2024-02.io.spdk:cnode0", 01:11:25.017 "hostnqn": "nqn.2024-02.io.spdk:host0", 01:11:25.017 "prchk_reftag": false, 01:11:25.017 "prchk_guard": false, 01:11:25.017 "hdgst": false, 01:11:25.017 "ddgst": false, 01:11:25.017 "dhchap_key": "key2", 01:11:25.017 "allow_unrecognized_csi": false, 01:11:25.017 "method": "bdev_nvme_attach_controller", 01:11:25.017 "req_id": 1 01:11:25.017 } 01:11:25.017 Got JSON-RPC error response 01:11:25.017 response: 01:11:25.017 { 01:11:25.017 "code": -5, 01:11:25.017 "message": "Input/output error" 01:11:25.017 } 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:11:25.017 06:10:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:25.017 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:25.277 request: 01:11:25.277 { 01:11:25.277 "name": "nvme0", 01:11:25.277 "trtype": "tcp", 01:11:25.277 "traddr": "10.0.0.1", 01:11:25.277 "adrfam": "ipv4", 01:11:25.277 "trsvcid": "4420", 
01:11:25.277 "subnqn": "nqn.2024-02.io.spdk:cnode0", 01:11:25.277 "hostnqn": "nqn.2024-02.io.spdk:host0", 01:11:25.277 "prchk_reftag": false, 01:11:25.277 "prchk_guard": false, 01:11:25.277 "hdgst": false, 01:11:25.277 "ddgst": false, 01:11:25.277 "dhchap_key": "key1", 01:11:25.277 "dhchap_ctrlr_key": "ckey2", 01:11:25.277 "allow_unrecognized_csi": false, 01:11:25.277 "method": "bdev_nvme_attach_controller", 01:11:25.277 "req_id": 1 01:11:25.277 } 01:11:25.277 Got JSON-RPC error response 01:11:25.277 response: 01:11:25.277 { 01:11:25.277 "code": -5, 01:11:25.277 "message": "Input/output error" 01:11:25.277 } 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:25.277 nvme0n1 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:11:25.277 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: ]] 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:25.278 request: 01:11:25.278 { 01:11:25.278 "name": "nvme0", 01:11:25.278 "dhchap_key": "key1", 01:11:25.278 "dhchap_ctrlr_key": "ckey2", 01:11:25.278 "method": "bdev_nvme_set_keys", 01:11:25.278 "req_id": 1 01:11:25.278 } 01:11:25.278 Got JSON-RPC error response 01:11:25.278 response: 01:11:25.278 
{ 01:11:25.278 "code": -13, 01:11:25.278 "message": "Permission denied" 01:11:25.278 } 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 01:11:25.278 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:25.537 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 01:11:25.537 06:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA0ZDY2OGMwZGE3YjRhM2ZjYzZjNGFmOTE1ZGQ3OTZlYjcxZjNiZWVlYWMxM2Zm6eRBaw==: 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: ]] 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQ3ZjIxMTE4MTljNWIzMjM5ZWZkZWFlN2ZjNTUyZmJjMGZlZGM3OGQ4MTM5Y2Rk5PIUcQ==: 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:26.478 06:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:26.478 nvme0n1 01:11:26.478 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:26.478 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 01:11:26.478 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:11:26.478 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:11:26.478 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:11:26.478 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:11:26.478 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:26.478 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:26.478 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:11:26.478 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:11:26.478 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODAyZjdmN2VjODg1OTNkMzRjZTNhZjNlMzQyMWJiY2J0qMZ4: 01:11:26.478 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: ]] 01:11:26.478 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDU1NzljMjhmYTk3OGEwMjI1ZjA0YWNjMzhlMWQ0MzMZgbBE: 01:11:26.478 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 01:11:26.478 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 01:11:26.478 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 01:11:26.478 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:11:26.478 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:11:26.478 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:11:26.478 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:11:26.478 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 01:11:26.478 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:26.478 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:26.478 request: 01:11:26.478 { 01:11:26.478 "name": "nvme0", 01:11:26.478 "dhchap_key": "key2", 01:11:26.478 "dhchap_ctrlr_key": "ckey1", 01:11:26.478 "method": "bdev_nvme_set_keys", 01:11:26.478 "req_id": 1 01:11:26.478 } 01:11:26.478 Got JSON-RPC error response 01:11:26.478 response: 01:11:26.478 { 01:11:26.478 "code": -13, 01:11:26.478 "message": "Permission denied" 01:11:26.478 } 01:11:26.478 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:11:26.478 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 01:11:26.478 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:11:26.478 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:11:26.478 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:11:26.738 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 01:11:26.738 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 01:11:26.738 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:26.738 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:26.738 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:26.738 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 01:11:26.738 06:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:11:27.719 rmmod nvme_tcp 01:11:27.719 rmmod nvme_fabrics 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 77854 ']' 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 77854 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 77854 ']' 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 77854 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77854 01:11:27.719 killing process with pid 77854 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77854' 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 77854 01:11:27.719 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 77854 01:11:27.979 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:11:27.979 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:11:27.979 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:11:27.979 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 01:11:27.979 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 01:11:27.979 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:11:27.979 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 01:11:27.979 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:11:27.979 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:11:27.979 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:11:27.979 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:11:27.979 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:11:27.979 06:10:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:11:27.979 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:11:27.979 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:11:27.979 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:11:27.979 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:11:27.979 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:11:28.240 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:11:28.240 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:11:28.240 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:11:28.240 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:11:28.240 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 01:11:28.240 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:11:28.240 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:11:28.240 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:11:28.240 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 01:11:28.240 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 01:11:28.240 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 01:11:28.240 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 01:11:28.240 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 01:11:28.240 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 01:11:28.240 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 01:11:28.240 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 01:11:28.240 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 01:11:28.240 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:11:28.240 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 01:11:28.240 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 01:11:28.240 06:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:11:29.180 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:11:29.440 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
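The auth-host steps traced above (host/auth.sh@133 through @137) rotate the DH-HMAC-CHAP keys on controller nvme0 with bdev_nvme_set_keys, confirm the controller survives the rotation, and then check that a mismatched key pair is refused with -13 (Permission denied) rather than silently accepted. A minimal standalone sketch of that sequence, assuming the target from earlier in the test is still running and the named keys are already registered, could look like this:

    #!/usr/bin/env bash
    # Hypothetical re-run of the key-rotation check above; key names, controller
    # name and RPC paths are taken from the log, everything else is assumed.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Rotate to the key pair the target currently expects: this should succeed.
    $rpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # The controller must still be listed after a successful rotation.
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0

    # Rotating to a key the target no longer accepts must fail with
    # "Permission denied" (-13) instead of downgrading the session.
    if $rpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
        echo "unexpected success: stale key was accepted" >&2
        exit 1
    fi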
01:11:29.440 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:11:29.440 06:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.pIc /tmp/spdk.key-null.YQk /tmp/spdk.key-sha256.fpK /tmp/spdk.key-sha384.rW0 /tmp/spdk.key-sha512.v3P /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 01:11:29.440 06:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:11:30.009 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:11:30.009 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:11:30.009 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:11:30.268 01:11:30.268 real 0m35.502s 01:11:30.268 user 0m32.711s 01:11:30.268 sys 0m5.548s 01:11:30.268 06:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 01:11:30.268 06:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:11:30.268 ************************************ 01:11:30.268 END TEST nvmf_auth_host 01:11:30.268 ************************************ 01:11:30.268 06:10:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 01:11:30.268 06:10:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 01:11:30.268 06:10:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:11:30.268 06:10:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:11:30.268 06:10:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:11:30.268 ************************************ 01:11:30.268 START TEST nvmf_digest 01:11:30.268 ************************************ 01:11:30.268 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 01:11:30.268 * Looking for test storage... 
01:11:30.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:11:30.268 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:11:30.268 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 01:11:30.268 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:11:30.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:11:30.527 --rc genhtml_branch_coverage=1 01:11:30.527 --rc genhtml_function_coverage=1 01:11:30.527 --rc genhtml_legend=1 01:11:30.527 --rc geninfo_all_blocks=1 01:11:30.527 --rc geninfo_unexecuted_blocks=1 01:11:30.527 01:11:30.527 ' 01:11:30.527 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:11:30.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:11:30.527 --rc genhtml_branch_coverage=1 01:11:30.527 --rc genhtml_function_coverage=1 01:11:30.527 --rc genhtml_legend=1 01:11:30.527 --rc geninfo_all_blocks=1 01:11:30.528 --rc geninfo_unexecuted_blocks=1 01:11:30.528 01:11:30.528 ' 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:11:30.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:11:30.528 --rc genhtml_branch_coverage=1 01:11:30.528 --rc genhtml_function_coverage=1 01:11:30.528 --rc genhtml_legend=1 01:11:30.528 --rc geninfo_all_blocks=1 01:11:30.528 --rc geninfo_unexecuted_blocks=1 01:11:30.528 01:11:30.528 ' 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:11:30.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:11:30.528 --rc genhtml_branch_coverage=1 01:11:30.528 --rc genhtml_function_coverage=1 01:11:30.528 --rc genhtml_legend=1 01:11:30.528 --rc geninfo_all_blocks=1 01:11:30.528 --rc geninfo_unexecuted_blocks=1 01:11:30.528 01:11:30.528 ' 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:11:30.528 06:10:24 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:11:30.528 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:11:30.528 06:10:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:11:30.528 Cannot find device "nvmf_init_br" 01:11:30.528 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 01:11:30.528 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:11:30.528 Cannot find device "nvmf_init_br2" 01:11:30.528 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 01:11:30.528 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:11:30.528 Cannot find device "nvmf_tgt_br" 01:11:30.528 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 01:11:30.528 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 01:11:30.528 Cannot find device "nvmf_tgt_br2" 01:11:30.528 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 01:11:30.528 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:11:30.528 Cannot find device "nvmf_init_br" 01:11:30.528 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 01:11:30.528 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:11:30.528 Cannot find device "nvmf_init_br2" 01:11:30.528 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 01:11:30.528 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:11:30.786 Cannot find device "nvmf_tgt_br" 01:11:30.786 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 01:11:30.786 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:11:30.786 Cannot find device "nvmf_tgt_br2" 01:11:30.786 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 01:11:30.787 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:11:30.787 Cannot find device "nvmf_br" 01:11:30.787 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 01:11:30.787 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:11:30.787 Cannot find device "nvmf_init_if" 01:11:30.787 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 01:11:30.787 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:11:30.787 Cannot find device "nvmf_init_if2" 01:11:30.787 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 01:11:30.787 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:11:30.787 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:11:30.787 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 01:11:30.787 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:11:30.787 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:11:30.787 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 01:11:30.787 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:11:30.787 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:11:30.787 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:11:30.787 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:11:30.787 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:11:30.787 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:11:30.787 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:11:30.787 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:11:30.787 06:10:25 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:11:30.787 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:11:30.787 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:11:30.787 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:11:30.787 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:11:30.787 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:11:30.787 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:11:30.787 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:11:30.787 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:11:30.787 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:11:30.787 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:11:31.045 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:11:31.045 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:11:31.045 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:11:31.045 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:11:31.045 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:11:31.045 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:11:31.045 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:11:31.045 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:11:31.046 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:11:31.046 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 01:11:31.046 01:11:31.046 --- 10.0.0.3 ping statistics --- 01:11:31.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:31.046 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:11:31.046 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:11:31.046 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.080 ms 01:11:31.046 01:11:31.046 --- 10.0.0.4 ping statistics --- 01:11:31.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:31.046 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:11:31.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:11:31.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 01:11:31.046 01:11:31.046 --- 10.0.0.1 ping statistics --- 01:11:31.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:31.046 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:11:31.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:11:31.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 01:11:31.046 01:11:31.046 --- 10.0.0.2 ping statistics --- 01:11:31.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:31.046 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 01:11:31.046 ************************************ 01:11:31.046 START TEST nvmf_digest_clean 01:11:31.046 ************************************ 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
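The nvmf_veth_init run traced above builds the whole test network from scratch: a target namespace, veth pairs whose peer ends are enslaved to a bridge, addresses on both sides, iptables rules for port 4420, and finally the ping checks. A condensed sketch of the same fixture, reduced to one initiator/target pair for brevity (the helper actually creates two of each), would be:

    #!/usr/bin/env bash
    # Sketch of the veth/bridge fixture built by nvmf_veth_init; commands mirror
    # the log, trimmed to a single initiator and a single target interface.
    set -e
    NS=nvmf_tgt_ns_spdk

    ip netns add "$NS"
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns "$NS"                          # move target end into the namespace

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set lo up

    # Bridge the peer ends together so 10.0.0.1 can reach 10.0.0.3.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Open the NVMe/TCP port, allow bridged traffic, then sanity-check reachability.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3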
01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=79500 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 79500 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79500 ']' 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:11:31.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:11:31.046 06:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:11:31.305 [2024-12-09 06:10:25.648742] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:11:31.305 [2024-12-09 06:10:25.648820] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:11:31.305 [2024-12-09 06:10:25.800795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:31.305 [2024-12-09 06:10:25.839118] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:11:31.305 [2024-12-09 06:10:25.839161] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:11:31.305 [2024-12-09 06:10:25.839170] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:11:31.305 [2024-12-09 06:10:25.839178] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:11:31.305 [2024-12-09 06:10:25.839185] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
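nvmfappstart above launches nvmf_tgt inside the target namespace with --wait-for-rpc, then waitforlisten polls its RPC socket before any configuration is sent; the null0 bdev and the listener on 10.0.0.3:4420 only appear a few lines further down, once common_target_config runs. A simplified, hypothetical equivalent of that bring-up, with assumed bdev sizes and an explicit RPC sequence standing in for the test's batched config, might be:

    #!/usr/bin/env bash
    # Rough equivalent of nvmfappstart + common_target_config. Only the pieces
    # visible in the log (netns, --wait-for-rpc, null0, cnode1, 10.0.0.3:4420)
    # are taken from it; socket path, sizes and RPC ordering are assumptions.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    tgt_pid=$!

    # Poll the RPC socket until the target answers (what waitforlisten does, simplified).
    until $rpc -s "$sock" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

    $rpc -s "$sock" framework_start_init
    $rpc -s "$sock" nvmf_create_transport -t tcp
    $rpc -s "$sock" bdev_null_create null0 100 512    # 100 MiB / 512 B block: assumed sizes
    $rpc -s "$sock" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    $rpc -s "$sock" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $rpc -s "$sock" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420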
01:11:31.305 [2024-12-09 06:10:25.839437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:11:32.238 06:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:11:32.238 06:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 01:11:32.238 06:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:11:32.238 06:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 01:11:32.238 06:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:11:32.238 06:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:11:32.238 06:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 01:11:32.238 06:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 01:11:32.238 06:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 01:11:32.238 06:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:32.238 06:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:11:32.238 [2024-12-09 06:10:26.624353] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:11:32.238 null0 01:11:32.238 [2024-12-09 06:10:26.669594] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:11:32.238 [2024-12-09 06:10:26.693667] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:11:32.238 06:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:32.238 06:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 01:11:32.238 06:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 01:11:32.238 06:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:11:32.238 06:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 01:11:32.238 06:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 01:11:32.238 06:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 01:11:32.238 06:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:11:32.238 06:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79533 01:11:32.238 06:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79533 /var/tmp/bperf.sock 01:11:32.238 06:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 01:11:32.238 06:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79533 ']' 01:11:32.238 06:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 01:11:32.238 06:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:11:32.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:11:32.238 06:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:11:32.238 06:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:11:32.238 06:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:11:32.238 [2024-12-09 06:10:26.751899] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:11:32.239 [2024-12-09 06:10:26.751967] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79533 ] 01:11:32.496 [2024-12-09 06:10:26.903653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:32.496 [2024-12-09 06:10:26.945027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:11:33.062 06:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:11:33.062 06:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 01:11:33.062 06:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:11:33.062 06:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:11:33.063 06:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:11:33.320 [2024-12-09 06:10:27.869925] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:11:33.578 06:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:11:33.578 06:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:11:33.836 nvme0n1 01:11:33.836 06:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:11:33.836 06:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:11:33.836 Running I/O for 2 seconds... 
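Each run_bperf iteration above follows the same pattern: start bdevperf idle against its own RPC socket, finish framework init over that socket, attach the remote namespace with the data digest enabled, and only then trigger the preconfigured workload through bdevperf.py. Spelled out as plain commands (paths and flags copied from the log; the polling loop is an added convenience), the first run is roughly:

    #!/usr/bin/env bash
    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bperf.sock

    # Start bdevperf with no bdevs (-z) and hold it until RPC configuration arrives.
    $spdk/build/examples/bdevperf -m 2 -r "$sock" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    until $spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

    # Finish init, then hot-attach the NVMe/TCP controller with data digest (--ddgst)
    # so every data PDU carries a CRC32C that the accel framework has to compute.
    $spdk/scripts/rpc.py -s "$sock" framework_start_init
    $spdk/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Run the 2-second randread workload that bdevperf was parameterized with.
    $spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests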
01:11:35.710 20320.00 IOPS, 79.38 MiB/s [2024-12-09T06:10:30.297Z] 20383.50 IOPS, 79.62 MiB/s 01:11:35.710 Latency(us) 01:11:35.710 [2024-12-09T06:10:30.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:35.710 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 01:11:35.710 nvme0n1 : 2.01 20428.54 79.80 0.00 0.00 6262.11 5764.01 19897.68 01:11:35.710 [2024-12-09T06:10:30.297Z] =================================================================================================================== 01:11:35.710 [2024-12-09T06:10:30.297Z] Total : 20428.54 79.80 0.00 0.00 6262.11 5764.01 19897.68 01:11:35.710 { 01:11:35.710 "results": [ 01:11:35.710 { 01:11:35.710 "job": "nvme0n1", 01:11:35.710 "core_mask": "0x2", 01:11:35.710 "workload": "randread", 01:11:35.710 "status": "finished", 01:11:35.710 "queue_depth": 128, 01:11:35.710 "io_size": 4096, 01:11:35.710 "runtime": 2.008073, 01:11:35.710 "iops": 20428.54019749282, 01:11:35.710 "mibps": 79.79898514645633, 01:11:35.710 "io_failed": 0, 01:11:35.710 "io_timeout": 0, 01:11:35.710 "avg_latency_us": 6262.111579270129, 01:11:35.710 "min_latency_us": 5764.0096385542165, 01:11:35.710 "max_latency_us": 19897.677108433734 01:11:35.710 } 01:11:35.710 ], 01:11:35.710 "core_count": 1 01:11:35.710 } 01:11:35.710 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:11:35.710 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:11:35.710 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:11:35.710 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:11:35.710 | select(.opcode=="crc32c") 01:11:35.710 | "\(.module_name) \(.executed)"' 01:11:35.710 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:11:35.970 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:11:35.970 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:11:35.970 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:11:35.970 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:11:35.970 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79533 01:11:35.970 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79533 ']' 01:11:35.970 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79533 01:11:35.970 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 01:11:35.970 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:11:35.970 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79533 01:11:35.970 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:11:35.970 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
01:11:35.970 killing process with pid 79533 01:11:35.970 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79533' 01:11:35.970 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79533 01:11:35.970 Received shutdown signal, test time was about 2.000000 seconds 01:11:35.970 01:11:35.970 Latency(us) 01:11:35.970 [2024-12-09T06:10:30.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:35.970 [2024-12-09T06:10:30.557Z] =================================================================================================================== 01:11:35.970 [2024-12-09T06:10:30.557Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:11:35.970 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79533 01:11:36.230 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 01:11:36.230 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 01:11:36.230 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:11:36.230 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 01:11:36.230 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 01:11:36.230 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 01:11:36.230 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:11:36.230 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79588 01:11:36.230 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79588 /var/tmp/bperf.sock 01:11:36.230 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 01:11:36.230 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79588 ']' 01:11:36.230 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:11:36.230 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:11:36.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:11:36.230 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:11:36.230 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:11:36.230 06:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:11:36.489 I/O size of 131072 is greater than zero copy threshold (65536). 01:11:36.489 Zero copy mechanism will not be used. 01:11:36.489 [2024-12-09 06:10:30.851955] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:11:36.489 [2024-12-09 06:10:30.852038] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79588 ] 01:11:36.489 [2024-12-09 06:10:30.990592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:36.489 [2024-12-09 06:10:31.054140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:11:37.425 06:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:11:37.425 06:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 01:11:37.425 06:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:11:37.425 06:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:11:37.425 06:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:11:37.425 [2024-12-09 06:10:31.991768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:11:37.685 06:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:11:37.685 06:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:11:37.944 nvme0n1 01:11:37.944 06:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:11:37.944 06:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:11:37.944 I/O size of 131072 is greater than zero copy threshold (65536). 01:11:37.944 Zero copy mechanism will not be used. 01:11:37.944 Running I/O for 2 seconds... 
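The run just launched follows the same bring-up pattern as every run_bperf iteration in this test: bdevperf is started suspended (-z --wait-for-rpc) on its own RPC socket, the framework is then initialized, the NVMe-oF TCP controller is attached with data digest enabled, and the workload is kicked off through the bdevperf RPC helper. A condensed sketch of that sequence with this run's parameters (randread, 128 KiB I/O, queue depth 16), using the same commands that appear in the trace:

# Start bdevperf paused so it can be configured over /var/tmp/bperf.sock
# before any I/O is issued.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &

# Finish subsystem init, attach the target with --ddgst so every data PDU
# carries a crc32c data digest, then run the timed workload.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests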
01:11:39.815 7984.00 IOPS, 998.00 MiB/s [2024-12-09T06:10:34.402Z] 8000.00 IOPS, 1000.00 MiB/s 01:11:39.815 Latency(us) 01:11:39.815 [2024-12-09T06:10:34.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:39.815 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 01:11:39.815 nvme0n1 : 2.00 8001.89 1000.24 0.00 0.00 1996.93 1895.02 5132.34 01:11:39.815 [2024-12-09T06:10:34.402Z] =================================================================================================================== 01:11:39.815 [2024-12-09T06:10:34.402Z] Total : 8001.89 1000.24 0.00 0.00 1996.93 1895.02 5132.34 01:11:39.815 { 01:11:39.815 "results": [ 01:11:39.815 { 01:11:39.815 "job": "nvme0n1", 01:11:39.815 "core_mask": "0x2", 01:11:39.815 "workload": "randread", 01:11:39.815 "status": "finished", 01:11:39.815 "queue_depth": 16, 01:11:39.815 "io_size": 131072, 01:11:39.815 "runtime": 2.001526, 01:11:39.815 "iops": 8001.894554454951, 01:11:39.815 "mibps": 1000.2368193068688, 01:11:39.815 "io_failed": 0, 01:11:39.815 "io_timeout": 0, 01:11:39.815 "avg_latency_us": 1996.9316723437205, 01:11:39.815 "min_latency_us": 1895.0168674698796, 01:11:39.815 "max_latency_us": 5132.3373493975905 01:11:39.815 } 01:11:39.815 ], 01:11:39.815 "core_count": 1 01:11:39.815 } 01:11:40.074 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:11:40.074 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:11:40.074 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:11:40.074 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:11:40.074 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:11:40.074 | select(.opcode=="crc32c") 01:11:40.074 | "\(.module_name) \(.executed)"' 01:11:40.074 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:11:40.074 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:11:40.074 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:11:40.074 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:11:40.074 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79588 01:11:40.074 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79588 ']' 01:11:40.074 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79588 01:11:40.074 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 01:11:40.074 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:11:40.074 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79588 01:11:40.332 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:11:40.332 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
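The IOPS and MiB/s columns in these summaries are tied together only by the run's I/O size: mibps = iops * io_size / 1048576. As a quick check against the result block above and the earlier 4 KiB run:

# 131072-byte I/Os: 8001.89 IOPS -> ~1000.24 MiB/s
echo '8001.894554454951 * 131072 / 1048576' | bc -l
# 4096-byte I/Os: 20428.54 IOPS -> ~79.80 MiB/s
echo '20428.54019749282 * 4096 / 1048576' | bc -l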
01:11:40.332 killing process with pid 79588 01:11:40.332 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79588' 01:11:40.332 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79588 01:11:40.332 Received shutdown signal, test time was about 2.000000 seconds 01:11:40.332 01:11:40.332 Latency(us) 01:11:40.332 [2024-12-09T06:10:34.919Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:40.332 [2024-12-09T06:10:34.919Z] =================================================================================================================== 01:11:40.332 [2024-12-09T06:10:34.919Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:11:40.332 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79588 01:11:40.332 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 01:11:40.332 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 01:11:40.332 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:11:40.332 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 01:11:40.332 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 01:11:40.332 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 01:11:40.332 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:11:40.591 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79648 01:11:40.591 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79648 /var/tmp/bperf.sock 01:11:40.591 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 01:11:40.591 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79648 ']' 01:11:40.591 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:11:40.591 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:11:40.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:11:40.591 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:11:40.591 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:11:40.591 06:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:11:40.591 [2024-12-09 06:10:34.970030] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:11:40.591 [2024-12-09 06:10:34.970128] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79648 ] 01:11:40.591 [2024-12-09 06:10:35.105803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:40.591 [2024-12-09 06:10:35.162077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:11:41.526 06:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:11:41.526 06:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 01:11:41.526 06:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:11:41.526 06:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:11:41.526 06:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:11:41.526 [2024-12-09 06:10:36.067459] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:11:41.784 06:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:11:41.784 06:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:11:42.042 nvme0n1 01:11:42.043 06:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:11:42.043 06:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:11:42.043 Running I/O for 2 seconds... 
01:11:43.910 21845.00 IOPS, 85.33 MiB/s [2024-12-09T06:10:38.497Z] 21844.50 IOPS, 85.33 MiB/s 01:11:43.910 Latency(us) 01:11:43.910 [2024-12-09T06:10:38.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:43.910 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:11:43.910 nvme0n1 : 2.00 21871.13 85.43 0.00 0.00 5848.03 5106.02 11212.18 01:11:43.910 [2024-12-09T06:10:38.497Z] =================================================================================================================== 01:11:43.910 [2024-12-09T06:10:38.497Z] Total : 21871.13 85.43 0.00 0.00 5848.03 5106.02 11212.18 01:11:43.910 { 01:11:43.910 "results": [ 01:11:43.910 { 01:11:43.910 "job": "nvme0n1", 01:11:43.910 "core_mask": "0x2", 01:11:43.910 "workload": "randwrite", 01:11:43.910 "status": "finished", 01:11:43.910 "queue_depth": 128, 01:11:43.910 "io_size": 4096, 01:11:43.910 "runtime": 2.003417, 01:11:43.910 "iops": 21871.133168980796, 01:11:43.911 "mibps": 85.43411394133123, 01:11:43.911 "io_failed": 0, 01:11:43.911 "io_timeout": 0, 01:11:43.911 "avg_latency_us": 5848.03054412231, 01:11:43.911 "min_latency_us": 5106.017670682731, 01:11:43.911 "max_latency_us": 11212.183132530121 01:11:43.911 } 01:11:43.911 ], 01:11:43.911 "core_count": 1 01:11:43.911 } 01:11:43.911 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:11:43.911 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:11:43.911 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:11:43.911 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:11:43.911 | select(.opcode=="crc32c") 01:11:43.911 | "\(.module_name) \(.executed)"' 01:11:43.911 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:11:44.169 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:11:44.169 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:11:44.169 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:11:44.169 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:11:44.169 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79648 01:11:44.169 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79648 ']' 01:11:44.169 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79648 01:11:44.169 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 01:11:44.169 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:11:44.169 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79648 01:11:44.169 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:11:44.169 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
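Each perform_tests call returns a JSON block like the ones above; the fields mirror the human-readable table (runtime, iops, mibps, io_failed, and average/min/max latency in microseconds). If one of those blocks were captured to a file, the headline numbers could be pulled out with a jq one-liner such as the following (results.json is just a placeholder for wherever the output was saved):

jq -r '.results[] |
       "\(.job): \(.iops | floor) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us, failed \(.io_failed)"' \
    results.json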
01:11:44.169 killing process with pid 79648 01:11:44.169 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79648' 01:11:44.169 Received shutdown signal, test time was about 2.000000 seconds 01:11:44.169 01:11:44.169 Latency(us) 01:11:44.169 [2024-12-09T06:10:38.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:44.169 [2024-12-09T06:10:38.756Z] =================================================================================================================== 01:11:44.169 [2024-12-09T06:10:38.756Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:11:44.169 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79648 01:11:44.169 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79648 01:11:44.427 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 01:11:44.427 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 01:11:44.427 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:11:44.427 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 01:11:44.427 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 01:11:44.427 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 01:11:44.427 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:11:44.427 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79702 01:11:44.427 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79702 /var/tmp/bperf.sock 01:11:44.427 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 01:11:44.427 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79702 ']' 01:11:44.427 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:11:44.427 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:11:44.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:11:44.427 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:11:44.427 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:11:44.427 06:10:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:11:44.685 I/O size of 131072 is greater than zero copy threshold (65536). 01:11:44.685 Zero copy mechanism will not be used. 01:11:44.685 [2024-12-09 06:10:39.038302] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:11:44.685 [2024-12-09 06:10:39.038372] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79702 ] 01:11:44.685 [2024-12-09 06:10:39.174025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:44.685 [2024-12-09 06:10:39.234915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:11:45.619 06:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:11:45.619 06:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 01:11:45.619 06:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:11:45.619 06:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:11:45.619 06:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:11:45.876 [2024-12-09 06:10:40.204621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:11:45.876 06:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:11:45.876 06:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:11:46.135 nvme0n1 01:11:46.135 06:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:11:46.135 06:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:11:46.135 I/O size of 131072 is greater than zero copy threshold (65536). 01:11:46.135 Zero copy mechanism will not be used. 01:11:46.135 Running I/O for 2 seconds... 
01:11:48.447 6548.00 IOPS, 818.50 MiB/s [2024-12-09T06:10:43.035Z] 6564.00 IOPS, 820.50 MiB/s 01:11:48.448 Latency(us) 01:11:48.448 [2024-12-09T06:10:43.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:48.448 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 01:11:48.448 nvme0n1 : 2.00 6561.18 820.15 0.00 0.00 2434.94 1750.26 5000.74 01:11:48.448 [2024-12-09T06:10:43.035Z] =================================================================================================================== 01:11:48.448 [2024-12-09T06:10:43.035Z] Total : 6561.18 820.15 0.00 0.00 2434.94 1750.26 5000.74 01:11:48.448 { 01:11:48.448 "results": [ 01:11:48.448 { 01:11:48.448 "job": "nvme0n1", 01:11:48.448 "core_mask": "0x2", 01:11:48.448 "workload": "randwrite", 01:11:48.448 "status": "finished", 01:11:48.448 "queue_depth": 16, 01:11:48.448 "io_size": 131072, 01:11:48.448 "runtime": 2.003298, 01:11:48.448 "iops": 6561.1806131688845, 01:11:48.448 "mibps": 820.1475766461106, 01:11:48.448 "io_failed": 0, 01:11:48.448 "io_timeout": 0, 01:11:48.448 "avg_latency_us": 2434.936722177816, 01:11:48.448 "min_latency_us": 1750.2586345381526, 01:11:48.448 "max_latency_us": 5000.7389558232935 01:11:48.448 } 01:11:48.448 ], 01:11:48.448 "core_count": 1 01:11:48.448 } 01:11:48.448 06:10:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:11:48.448 06:10:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:11:48.448 06:10:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:11:48.448 06:10:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:11:48.448 | select(.opcode=="crc32c") 01:11:48.448 | "\(.module_name) \(.executed)"' 01:11:48.448 06:10:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:11:48.448 06:10:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:11:48.448 06:10:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:11:48.448 06:10:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:11:48.448 06:10:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:11:48.448 06:10:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79702 01:11:48.448 06:10:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79702 ']' 01:11:48.448 06:10:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79702 01:11:48.448 06:10:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 01:11:48.448 06:10:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:11:48.448 06:10:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79702 01:11:48.448 06:10:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:11:48.448 killing process with pid 79702 01:11:48.448 06:10:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:11:48.448 06:10:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79702' 01:11:48.448 Received shutdown signal, test time was about 2.000000 seconds 01:11:48.448 01:11:48.448 Latency(us) 01:11:48.448 [2024-12-09T06:10:43.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:48.448 [2024-12-09T06:10:43.035Z] =================================================================================================================== 01:11:48.448 [2024-12-09T06:10:43.035Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:11:48.448 06:10:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79702 01:11:48.448 06:10:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79702 01:11:48.708 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79500 01:11:48.708 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79500 ']' 01:11:48.708 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79500 01:11:48.708 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 01:11:48.708 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:11:48.708 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79500 01:11:48.708 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:11:48.708 killing process with pid 79500 01:11:48.708 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:11:48.708 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79500' 01:11:48.708 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79500 01:11:48.708 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79500 01:11:48.967 01:11:48.967 real 0m17.781s 01:11:48.967 user 0m32.066s 01:11:48.967 sys 0m6.162s 01:11:48.967 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 01:11:48.967 ************************************ 01:11:48.968 END TEST nvmf_digest_clean 01:11:48.968 ************************************ 01:11:48.968 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:11:48.968 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 01:11:48.968 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:11:48.968 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 01:11:48.968 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 01:11:48.968 ************************************ 01:11:48.968 START TEST nvmf_digest_error 01:11:48.968 ************************************ 01:11:48.968 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 01:11:48.968 
06:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 01:11:48.968 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:11:48.968 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 01:11:48.968 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:11:48.968 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=79792 01:11:48.968 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 79792 01:11:48.968 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79792 ']' 01:11:48.968 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:11:48.968 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 01:11:48.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:11:48.968 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:11:48.968 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 01:11:48.968 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:11:48.968 06:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 01:11:48.968 [2024-12-09 06:10:43.526393] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:11:48.968 [2024-12-09 06:10:43.526454] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:11:49.228 [2024-12-09 06:10:43.677608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:49.228 [2024-12-09 06:10:43.714518] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:11:49.228 [2024-12-09 06:10:43.714560] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:11:49.228 [2024-12-09 06:10:43.714569] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:11:49.228 [2024-12-09 06:10:43.714577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:11:49.228 [2024-12-09 06:10:43.714583] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
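The nvmf_digest_error phase that begins here deliberately breaks the crc32c data digests using SPDK's error accel module: the crc32c opcode is reassigned to that module on the nvmf target before initialization (which is why the target is started with --wait-for-rpc), injection is kept disabled while bdevperf attaches a controller with --ddgst, and then a batch of operations is corrupted so the host's receive path reports data digest errors. A rough sketch of the RPCs involved, matching the rpc_cmd calls traced in the following lines and assuming the target's default RPC socket:

# Route every crc32c operation on the nvmf target through the error-injection
# accel module; this has to happen before framework init.
scripts/rpc.py accel_assign_opc -o crc32c -m error

# Leave injection off while the host attaches with data digest enabled ...
scripts/rpc.py accel_error_inject_error -o crc32c -t disable

# ... then corrupt the next 256 crc32c results, so the reads below complete
# with "data digest error" and COMMAND TRANSIENT TRANSPORT ERROR (00/22).
scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256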
01:11:49.228 [2024-12-09 06:10:43.714841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:11:49.797 06:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:11:49.797 06:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 01:11:49.797 06:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:11:49.797 06:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 01:11:49.797 06:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:11:50.058 06:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:11:50.058 06:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 01:11:50.058 06:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:50.058 06:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:11:50.058 [2024-12-09 06:10:44.434120] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 01:11:50.058 06:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:50.058 06:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 01:11:50.058 06:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 01:11:50.058 06:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:50.058 06:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:11:50.058 [2024-12-09 06:10:44.486958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:11:50.058 null0 01:11:50.058 [2024-12-09 06:10:44.531957] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:11:50.058 [2024-12-09 06:10:44.556036] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:11:50.058 06:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:50.058 06:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 01:11:50.058 06:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 01:11:50.058 06:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 01:11:50.058 06:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 01:11:50.058 06:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 01:11:50.058 06:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79824 01:11:50.058 06:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79824 /var/tmp/bperf.sock 01:11:50.058 06:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 01:11:50.058 06:10:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79824 ']' 01:11:50.058 06:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:11:50.058 06:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 01:11:50.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:11:50.058 06:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:11:50.058 06:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 01:11:50.058 06:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:11:50.058 [2024-12-09 06:10:44.614148] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:11:50.058 [2024-12-09 06:10:44.614204] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79824 ] 01:11:50.316 [2024-12-09 06:10:44.765367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:50.316 [2024-12-09 06:10:44.820951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:11:50.316 [2024-12-09 06:10:44.890866] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:11:50.883 06:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:11:50.883 06:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 01:11:50.883 06:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:11:50.883 06:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:11:51.141 06:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 01:11:51.141 06:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:51.141 06:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:11:51.141 06:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:51.141 06:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:11:51.141 06:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:11:51.399 nvme0n1 01:11:51.399 06:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 01:11:51.399 06:10:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:51.399 06:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:11:51.399 06:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:51.399 06:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 01:11:51.399 06:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:11:51.658 Running I/O for 2 seconds... 01:11:51.658 [2024-12-09 06:10:46.068445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.658 [2024-12-09 06:10:46.068493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.658 [2024-12-09 06:10:46.068507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.658 [2024-12-09 06:10:46.081014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.658 [2024-12-09 06:10:46.081052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.658 [2024-12-09 06:10:46.081064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.658 [2024-12-09 06:10:46.093560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.658 [2024-12-09 06:10:46.093594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.658 [2024-12-09 06:10:46.093606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.658 [2024-12-09 06:10:46.106023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.658 [2024-12-09 06:10:46.106055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.658 [2024-12-09 06:10:46.106067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.658 [2024-12-09 06:10:46.118351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.658 [2024-12-09 06:10:46.118382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.658 [2024-12-09 06:10:46.118395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.658 [2024-12-09 06:10:46.130684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.658 [2024-12-09 06:10:46.130715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20384 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.658 [2024-12-09 06:10:46.130726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.658 [2024-12-09 06:10:46.143068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.658 [2024-12-09 06:10:46.143109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.658 [2024-12-09 06:10:46.143121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.658 [2024-12-09 06:10:46.155476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.658 [2024-12-09 06:10:46.155506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.658 [2024-12-09 06:10:46.155517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.658 [2024-12-09 06:10:46.167803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.658 [2024-12-09 06:10:46.167834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.658 [2024-12-09 06:10:46.167845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.658 [2024-12-09 06:10:46.180134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.658 [2024-12-09 06:10:46.180163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.658 [2024-12-09 06:10:46.180173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.658 [2024-12-09 06:10:46.192650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.658 [2024-12-09 06:10:46.192680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.658 [2024-12-09 06:10:46.192690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.658 [2024-12-09 06:10:46.205019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.658 [2024-12-09 06:10:46.205050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.658 [2024-12-09 06:10:46.205061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.658 [2024-12-09 06:10:46.217469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.658 [2024-12-09 06:10:46.217499] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.658 [2024-12-09 06:10:46.217509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.658 [2024-12-09 06:10:46.230350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.658 [2024-12-09 06:10:46.230382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.658 [2024-12-09 06:10:46.230393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.916 [2024-12-09 06:10:46.243360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.916 [2024-12-09 06:10:46.243393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.916 [2024-12-09 06:10:46.243405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.916 [2024-12-09 06:10:46.256442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.916 [2024-12-09 06:10:46.256473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.916 [2024-12-09 06:10:46.256485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.916 [2024-12-09 06:10:46.269179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.916 [2024-12-09 06:10:46.269209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.916 [2024-12-09 06:10:46.269219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.916 [2024-12-09 06:10:46.281770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.916 [2024-12-09 06:10:46.281801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.916 [2024-12-09 06:10:46.281813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.916 [2024-12-09 06:10:46.294324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.916 [2024-12-09 06:10:46.294354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.916 [2024-12-09 06:10:46.294365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.916 [2024-12-09 06:10:46.306949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.916 [2024-12-09 06:10:46.306983] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.916 [2024-12-09 06:10:46.306993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.917 [2024-12-09 06:10:46.319380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.917 [2024-12-09 06:10:46.319411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.917 [2024-12-09 06:10:46.319422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.917 [2024-12-09 06:10:46.331841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.917 [2024-12-09 06:10:46.331873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.917 [2024-12-09 06:10:46.331884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.917 [2024-12-09 06:10:46.344216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.917 [2024-12-09 06:10:46.344246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.917 [2024-12-09 06:10:46.344256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.917 [2024-12-09 06:10:46.356612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.917 [2024-12-09 06:10:46.356643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.917 [2024-12-09 06:10:46.356653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.917 [2024-12-09 06:10:46.369013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.917 [2024-12-09 06:10:46.369045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.917 [2024-12-09 06:10:46.369056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.917 [2024-12-09 06:10:46.381369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.917 [2024-12-09 06:10:46.381399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.917 [2024-12-09 06:10:46.381409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.917 [2024-12-09 06:10:46.393892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f55b50) 01:11:51.917 [2024-12-09 06:10:46.393924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.917 [2024-12-09 06:10:46.393935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.917 [2024-12-09 06:10:46.406339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.917 [2024-12-09 06:10:46.406371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.917 [2024-12-09 06:10:46.406382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.917 [2024-12-09 06:10:46.418784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.917 [2024-12-09 06:10:46.418815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.917 [2024-12-09 06:10:46.418826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.917 [2024-12-09 06:10:46.431279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.917 [2024-12-09 06:10:46.431309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.917 [2024-12-09 06:10:46.431320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.917 [2024-12-09 06:10:46.443760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.917 [2024-12-09 06:10:46.443790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.917 [2024-12-09 06:10:46.443801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.917 [2024-12-09 06:10:46.456300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.917 [2024-12-09 06:10:46.456330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.917 [2024-12-09 06:10:46.456341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.917 [2024-12-09 06:10:46.468714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.917 [2024-12-09 06:10:46.468744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.917 [2024-12-09 06:10:46.468754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.917 [2024-12-09 06:10:46.481038] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.917 [2024-12-09 06:10:46.481068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.917 [2024-12-09 06:10:46.481078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:51.917 [2024-12-09 06:10:46.493455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:51.917 [2024-12-09 06:10:46.493485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:51.917 [2024-12-09 06:10:46.493496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.176 [2024-12-09 06:10:46.506185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.176 [2024-12-09 06:10:46.506216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.176 [2024-12-09 06:10:46.506227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.176 [2024-12-09 06:10:46.518749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.176 [2024-12-09 06:10:46.518779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.176 [2024-12-09 06:10:46.518790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.176 [2024-12-09 06:10:46.531268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.176 [2024-12-09 06:10:46.531298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.176 [2024-12-09 06:10:46.531308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.176 [2024-12-09 06:10:46.543633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.176 [2024-12-09 06:10:46.543663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.176 [2024-12-09 06:10:46.543673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.176 [2024-12-09 06:10:46.556003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.176 [2024-12-09 06:10:46.556034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.176 [2024-12-09 06:10:46.556044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 01:11:52.176 [2024-12-09 06:10:46.568358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.176 [2024-12-09 06:10:46.568388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.176 [2024-12-09 06:10:46.568398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.176 [2024-12-09 06:10:46.580726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.176 [2024-12-09 06:10:46.580756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.176 [2024-12-09 06:10:46.580767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.176 [2024-12-09 06:10:46.593117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.176 [2024-12-09 06:10:46.593146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.176 [2024-12-09 06:10:46.593156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.176 [2024-12-09 06:10:46.605491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.176 [2024-12-09 06:10:46.605523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.176 [2024-12-09 06:10:46.605533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.176 [2024-12-09 06:10:46.617983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.176 [2024-12-09 06:10:46.618015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.176 [2024-12-09 06:10:46.618025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.176 [2024-12-09 06:10:46.630419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.176 [2024-12-09 06:10:46.630449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.176 [2024-12-09 06:10:46.630459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.176 [2024-12-09 06:10:46.642967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.176 [2024-12-09 06:10:46.642996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.176 [2024-12-09 06:10:46.643006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.176 [2024-12-09 06:10:46.655568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.176 [2024-12-09 06:10:46.655597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.176 [2024-12-09 06:10:46.655607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.176 [2024-12-09 06:10:46.668151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.176 [2024-12-09 06:10:46.668179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.176 [2024-12-09 06:10:46.668190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.176 [2024-12-09 06:10:46.680620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.176 [2024-12-09 06:10:46.680649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.176 [2024-12-09 06:10:46.680659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.176 [2024-12-09 06:10:46.692988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.176 [2024-12-09 06:10:46.693018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.176 [2024-12-09 06:10:46.693029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.176 [2024-12-09 06:10:46.705649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.176 [2024-12-09 06:10:46.705680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.176 [2024-12-09 06:10:46.705691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.176 [2024-12-09 06:10:46.718052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.176 [2024-12-09 06:10:46.718084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.176 [2024-12-09 06:10:46.718103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.176 [2024-12-09 06:10:46.730575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.176 [2024-12-09 06:10:46.730604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.176 [2024-12-09 06:10:46.730614] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.176 [2024-12-09 06:10:46.742934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.176 [2024-12-09 06:10:46.742963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.176 [2024-12-09 06:10:46.742974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.176 [2024-12-09 06:10:46.755480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.176 [2024-12-09 06:10:46.755509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.176 [2024-12-09 06:10:46.755519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.435 [2024-12-09 06:10:46.768165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.435 [2024-12-09 06:10:46.768194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.435 [2024-12-09 06:10:46.768205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.435 [2024-12-09 06:10:46.780624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.435 [2024-12-09 06:10:46.780654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.435 [2024-12-09 06:10:46.780664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.435 [2024-12-09 06:10:46.793024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.435 [2024-12-09 06:10:46.793055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.435 [2024-12-09 06:10:46.793065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.435 [2024-12-09 06:10:46.805658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.435 [2024-12-09 06:10:46.805688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.435 [2024-12-09 06:10:46.805699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.436 [2024-12-09 06:10:46.818508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.436 [2024-12-09 06:10:46.818540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17029 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 01:11:52.436 [2024-12-09 06:10:46.818551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.436 [2024-12-09 06:10:46.831535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.436 [2024-12-09 06:10:46.831566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.436 [2024-12-09 06:10:46.831577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.436 [2024-12-09 06:10:46.844284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.436 [2024-12-09 06:10:46.844313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.436 [2024-12-09 06:10:46.844324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.436 [2024-12-09 06:10:46.862481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.436 [2024-12-09 06:10:46.862511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.436 [2024-12-09 06:10:46.862521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.436 [2024-12-09 06:10:46.875260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.436 [2024-12-09 06:10:46.875290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.436 [2024-12-09 06:10:46.875300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.436 [2024-12-09 06:10:46.887887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.436 [2024-12-09 06:10:46.887918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.436 [2024-12-09 06:10:46.887928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.436 [2024-12-09 06:10:46.900588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.436 [2024-12-09 06:10:46.900618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.436 [2024-12-09 06:10:46.900628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.436 [2024-12-09 06:10:46.913396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.436 [2024-12-09 06:10:46.913425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:118 nsid:1 lba:9573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.436 [2024-12-09 06:10:46.913436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.436 [2024-12-09 06:10:46.926158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.436 [2024-12-09 06:10:46.926188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.436 [2024-12-09 06:10:46.926199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.436 [2024-12-09 06:10:46.938834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.436 [2024-12-09 06:10:46.938877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.436 [2024-12-09 06:10:46.938892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.436 [2024-12-09 06:10:46.951426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.436 [2024-12-09 06:10:46.951455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.436 [2024-12-09 06:10:46.951466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.436 [2024-12-09 06:10:46.964042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.436 [2024-12-09 06:10:46.964073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.436 [2024-12-09 06:10:46.964083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.436 [2024-12-09 06:10:46.976434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.436 [2024-12-09 06:10:46.976463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.436 [2024-12-09 06:10:46.976475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.436 [2024-12-09 06:10:46.988992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.436 [2024-12-09 06:10:46.989022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.436 [2024-12-09 06:10:46.989032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.436 [2024-12-09 06:10:47.001414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.436 [2024-12-09 06:10:47.001443] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.436 [2024-12-09 06:10:47.001453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.436 [2024-12-09 06:10:47.013861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.436 [2024-12-09 06:10:47.013891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.436 [2024-12-09 06:10:47.013901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.695 [2024-12-09 06:10:47.026609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.695 [2024-12-09 06:10:47.026639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.695 [2024-12-09 06:10:47.026649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.695 [2024-12-09 06:10:47.039285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.695 [2024-12-09 06:10:47.039313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.695 [2024-12-09 06:10:47.039324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.695 19988.00 IOPS, 78.08 MiB/s [2024-12-09T06:10:47.282Z] [2024-12-09 06:10:47.053248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.695 [2024-12-09 06:10:47.053278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.695 [2024-12-09 06:10:47.053288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.695 [2024-12-09 06:10:47.065728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.695 [2024-12-09 06:10:47.065757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.695 [2024-12-09 06:10:47.065768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.695 [2024-12-09 06:10:47.078250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.695 [2024-12-09 06:10:47.078281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.695 [2024-12-09 06:10:47.078292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.695 [2024-12-09 06:10:47.090913] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.695 [2024-12-09 06:10:47.090944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.695 [2024-12-09 06:10:47.090955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.695 [2024-12-09 06:10:47.103388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.695 [2024-12-09 06:10:47.103417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.695 [2024-12-09 06:10:47.103427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.695 [2024-12-09 06:10:47.115740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.695 [2024-12-09 06:10:47.115770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.695 [2024-12-09 06:10:47.115780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.695 [2024-12-09 06:10:47.128128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.695 [2024-12-09 06:10:47.128157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.695 [2024-12-09 06:10:47.128168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.695 [2024-12-09 06:10:47.140521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.695 [2024-12-09 06:10:47.140552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.695 [2024-12-09 06:10:47.140562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.695 [2024-12-09 06:10:47.153069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.695 [2024-12-09 06:10:47.153109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.695 [2024-12-09 06:10:47.153119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.695 [2024-12-09 06:10:47.165509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.695 [2024-12-09 06:10:47.165537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.695 [2024-12-09 06:10:47.165548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 01:11:52.695 [2024-12-09 06:10:47.177836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.695 [2024-12-09 06:10:47.177865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.695 [2024-12-09 06:10:47.177875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.695 [2024-12-09 06:10:47.190191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.695 [2024-12-09 06:10:47.190220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.695 [2024-12-09 06:10:47.190230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.695 [2024-12-09 06:10:47.202633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.695 [2024-12-09 06:10:47.202662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.695 [2024-12-09 06:10:47.202672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.695 [2024-12-09 06:10:47.215004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.695 [2024-12-09 06:10:47.215034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.695 [2024-12-09 06:10:47.215044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.695 [2024-12-09 06:10:47.227398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.695 [2024-12-09 06:10:47.227429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.695 [2024-12-09 06:10:47.227439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.695 [2024-12-09 06:10:47.239811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.695 [2024-12-09 06:10:47.239840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.695 [2024-12-09 06:10:47.239851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.695 [2024-12-09 06:10:47.252579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.695 [2024-12-09 06:10:47.252608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.695 [2024-12-09 06:10:47.252618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.695 [2024-12-09 06:10:47.265392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.695 [2024-12-09 06:10:47.265439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.695 [2024-12-09 06:10:47.265450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.695 [2024-12-09 06:10:47.278470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.695 [2024-12-09 06:10:47.278501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.695 [2024-12-09 06:10:47.278512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.954 [2024-12-09 06:10:47.291499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.954 [2024-12-09 06:10:47.291527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.954 [2024-12-09 06:10:47.291538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.954 [2024-12-09 06:10:47.304010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.954 [2024-12-09 06:10:47.304039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.954 [2024-12-09 06:10:47.304050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.954 [2024-12-09 06:10:47.317012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.954 [2024-12-09 06:10:47.317044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.954 [2024-12-09 06:10:47.317054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.954 [2024-12-09 06:10:47.329394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.954 [2024-12-09 06:10:47.329423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.954 [2024-12-09 06:10:47.329434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.954 [2024-12-09 06:10:47.341848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.954 [2024-12-09 06:10:47.341880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.954 [2024-12-09 06:10:47.341890] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.954 [2024-12-09 06:10:47.354278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.954 [2024-12-09 06:10:47.354307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.954 [2024-12-09 06:10:47.354318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.954 [2024-12-09 06:10:47.366628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.954 [2024-12-09 06:10:47.366658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.954 [2024-12-09 06:10:47.366668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.954 [2024-12-09 06:10:47.379050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.954 [2024-12-09 06:10:47.379080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.954 [2024-12-09 06:10:47.379099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.954 [2024-12-09 06:10:47.391466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.954 [2024-12-09 06:10:47.391494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.954 [2024-12-09 06:10:47.391504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.954 [2024-12-09 06:10:47.403953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.954 [2024-12-09 06:10:47.403982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.955 [2024-12-09 06:10:47.403993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.955 [2024-12-09 06:10:47.416330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.955 [2024-12-09 06:10:47.416358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.955 [2024-12-09 06:10:47.416369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.955 [2024-12-09 06:10:47.428668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.955 [2024-12-09 06:10:47.428697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:11:52.955 [2024-12-09 06:10:47.428707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.955 [2024-12-09 06:10:47.441077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.955 [2024-12-09 06:10:47.441118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.955 [2024-12-09 06:10:47.441129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.955 [2024-12-09 06:10:47.453659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.955 [2024-12-09 06:10:47.453690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.955 [2024-12-09 06:10:47.453701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.955 [2024-12-09 06:10:47.466003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.955 [2024-12-09 06:10:47.466033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.955 [2024-12-09 06:10:47.466043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.955 [2024-12-09 06:10:47.478424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.955 [2024-12-09 06:10:47.478452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.955 [2024-12-09 06:10:47.478462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.955 [2024-12-09 06:10:47.490884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.955 [2024-12-09 06:10:47.490913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.955 [2024-12-09 06:10:47.490924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.955 [2024-12-09 06:10:47.503422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.955 [2024-12-09 06:10:47.503451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.955 [2024-12-09 06:10:47.503461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.955 [2024-12-09 06:10:47.515796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.955 [2024-12-09 06:10:47.515825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:16422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.955 [2024-12-09 06:10:47.515835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:52.955 [2024-12-09 06:10:47.528225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:52.955 [2024-12-09 06:10:47.528254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:52.955 [2024-12-09 06:10:47.528264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.214 [2024-12-09 06:10:47.540750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.214 [2024-12-09 06:10:47.540781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.214 [2024-12-09 06:10:47.540792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.214 [2024-12-09 06:10:47.553339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.214 [2024-12-09 06:10:47.553376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.214 [2024-12-09 06:10:47.553387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.214 [2024-12-09 06:10:47.565901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.214 [2024-12-09 06:10:47.565932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.214 [2024-12-09 06:10:47.565942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.214 [2024-12-09 06:10:47.578409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.214 [2024-12-09 06:10:47.578438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.214 [2024-12-09 06:10:47.578449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.214 [2024-12-09 06:10:47.590794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.214 [2024-12-09 06:10:47.590822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.214 [2024-12-09 06:10:47.590833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.214 [2024-12-09 06:10:47.603337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.214 [2024-12-09 06:10:47.603367] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.214 [2024-12-09 06:10:47.603378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.214 [2024-12-09 06:10:47.615845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.214 [2024-12-09 06:10:47.615874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.214 [2024-12-09 06:10:47.615885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.214 [2024-12-09 06:10:47.628223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.214 [2024-12-09 06:10:47.628251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.214 [2024-12-09 06:10:47.628262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.214 [2024-12-09 06:10:47.640547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.214 [2024-12-09 06:10:47.640576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.214 [2024-12-09 06:10:47.640586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.214 [2024-12-09 06:10:47.653055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.214 [2024-12-09 06:10:47.653095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.214 [2024-12-09 06:10:47.653106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.214 [2024-12-09 06:10:47.670970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.214 [2024-12-09 06:10:47.671001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.214 [2024-12-09 06:10:47.671012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.214 [2024-12-09 06:10:47.683283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.214 [2024-12-09 06:10:47.683313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.214 [2024-12-09 06:10:47.683323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.214 [2024-12-09 06:10:47.695752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.214 
[2024-12-09 06:10:47.695783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.214 [2024-12-09 06:10:47.695793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.214 [2024-12-09 06:10:47.708065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.214 [2024-12-09 06:10:47.708105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.214 [2024-12-09 06:10:47.708116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.214 [2024-12-09 06:10:47.720428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.214 [2024-12-09 06:10:47.720458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.214 [2024-12-09 06:10:47.720468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.214 [2024-12-09 06:10:47.732822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.214 [2024-12-09 06:10:47.732852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.214 [2024-12-09 06:10:47.732863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.214 [2024-12-09 06:10:47.745124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.214 [2024-12-09 06:10:47.745155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.214 [2024-12-09 06:10:47.745166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.214 [2024-12-09 06:10:47.757565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.214 [2024-12-09 06:10:47.757595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.214 [2024-12-09 06:10:47.757606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.214 [2024-12-09 06:10:47.769825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.214 [2024-12-09 06:10:47.769856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.214 [2024-12-09 06:10:47.769866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.214 [2024-12-09 06:10:47.782155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1f55b50) 01:11:53.214 [2024-12-09 06:10:47.782184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.214 [2024-12-09 06:10:47.782194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.214 [2024-12-09 06:10:47.794408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.214 [2024-12-09 06:10:47.794437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.214 [2024-12-09 06:10:47.794448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.472 [2024-12-09 06:10:47.807108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.472 [2024-12-09 06:10:47.807136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.472 [2024-12-09 06:10:47.807146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.472 [2024-12-09 06:10:47.819515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.472 [2024-12-09 06:10:47.819544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.472 [2024-12-09 06:10:47.819554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.472 [2024-12-09 06:10:47.832038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.472 [2024-12-09 06:10:47.832068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.472 [2024-12-09 06:10:47.832079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.472 [2024-12-09 06:10:47.844446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.472 [2024-12-09 06:10:47.844476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.472 [2024-12-09 06:10:47.844487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.472 [2024-12-09 06:10:47.856888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.472 [2024-12-09 06:10:47.856918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.472 [2024-12-09 06:10:47.856929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.472 [2024-12-09 06:10:47.869196] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.472 [2024-12-09 06:10:47.869225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.472 [2024-12-09 06:10:47.869235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.472 [2024-12-09 06:10:47.881565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.473 [2024-12-09 06:10:47.881595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.473 [2024-12-09 06:10:47.881606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.473 [2024-12-09 06:10:47.893943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.473 [2024-12-09 06:10:47.893973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.473 [2024-12-09 06:10:47.893984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.473 [2024-12-09 06:10:47.906383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.473 [2024-12-09 06:10:47.906412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.473 [2024-12-09 06:10:47.906422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.473 [2024-12-09 06:10:47.918693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.473 [2024-12-09 06:10:47.918722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.473 [2024-12-09 06:10:47.918732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.473 [2024-12-09 06:10:47.931398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.473 [2024-12-09 06:10:47.931429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.473 [2024-12-09 06:10:47.931439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.473 [2024-12-09 06:10:47.944186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.473 [2024-12-09 06:10:47.944217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.473 [2024-12-09 06:10:47.944228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 01:11:53.473 [2024-12-09 06:10:47.957241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.473 [2024-12-09 06:10:47.957271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.473 [2024-12-09 06:10:47.957282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.473 [2024-12-09 06:10:47.970205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.473 [2024-12-09 06:10:47.970237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.473 [2024-12-09 06:10:47.970248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.473 [2024-12-09 06:10:47.982982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.473 [2024-12-09 06:10:47.983012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.473 [2024-12-09 06:10:47.983022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.473 [2024-12-09 06:10:47.995666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.473 [2024-12-09 06:10:47.995695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.473 [2024-12-09 06:10:47.995706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.473 [2024-12-09 06:10:48.008313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.473 [2024-12-09 06:10:48.008507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.473 [2024-12-09 06:10:48.008520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.473 [2024-12-09 06:10:48.021135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.473 [2024-12-09 06:10:48.021167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.473 [2024-12-09 06:10:48.021177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.473 [2024-12-09 06:10:48.033793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.473 [2024-12-09 06:10:48.033825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.473 [2024-12-09 06:10:48.033836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.473 20114.50 IOPS, 78.57 MiB/s [2024-12-09T06:10:48.060Z] [2024-12-09 06:10:48.047006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f55b50) 01:11:53.473 [2024-12-09 06:10:48.047033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:53.473 [2024-12-09 06:10:48.047044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:11:53.473 01:11:53.473 Latency(us) 01:11:53.473 [2024-12-09T06:10:48.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:53.473 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 01:11:53.473 nvme0n1 : 2.00 20137.54 78.66 0.00 0.00 6351.80 6053.53 24635.22 01:11:53.473 [2024-12-09T06:10:48.060Z] =================================================================================================================== 01:11:53.473 [2024-12-09T06:10:48.060Z] Total : 20137.54 78.66 0.00 0.00 6351.80 6053.53 24635.22 01:11:53.473 { 01:11:53.473 "results": [ 01:11:53.473 { 01:11:53.473 "job": "nvme0n1", 01:11:53.473 "core_mask": "0x2", 01:11:53.473 "workload": "randread", 01:11:53.473 "status": "finished", 01:11:53.473 "queue_depth": 128, 01:11:53.473 "io_size": 4096, 01:11:53.473 "runtime": 2.004068, 01:11:53.473 "iops": 20137.540243145442, 01:11:53.473 "mibps": 78.66226657478688, 01:11:53.473 "io_failed": 0, 01:11:53.473 "io_timeout": 0, 01:11:53.473 "avg_latency_us": 6351.804818301877, 01:11:53.473 "min_latency_us": 6053.5261044176705, 01:11:53.473 "max_latency_us": 24635.219277108434 01:11:53.473 } 01:11:53.473 ], 01:11:53.473 "core_count": 1 01:11:53.473 } 01:11:53.731 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 01:11:53.732 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 01:11:53.732 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 01:11:53.732 | .driver_specific 01:11:53.732 | .nvme_error 01:11:53.732 | .status_code 01:11:53.732 | .command_transient_transport_error' 01:11:53.732 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 01:11:53.732 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 158 > 0 )) 01:11:53.732 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79824 01:11:53.732 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79824 ']' 01:11:53.732 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79824 01:11:53.732 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 01:11:53.732 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:11:53.732 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79824 01:11:53.998 killing process with pid 79824 01:11:53.998 Received shutdown signal, test time was about 2.000000 seconds 01:11:53.998 
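The assertion traced just above, (( 158 > 0 )), is the point of this pass: with --nvme-error-stat enabled, each injected data digest failure is recorded as a COMMAND TRANSIENT TRANSPORT ERROR in the bdev's NVMe error statistics, and the test requires that counter to be non-zero. A minimal shell sketch of that check, reusing the rpc.py invocation and jq filter visible in the trace (the helper name get_transient_errcount and the socket path /var/tmp/bperf.sock are taken from the trace; this is an illustration, not the test script itself):

    # Read the per-error-code NVMe counters exposed through bdev_get_iostat
    # and print the transient transport error count for the given bdev.
    get_transient_errcount() {
        local bdev=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)
    (( errcount > 0 ))   # the trace above reports 158 such errors

Here the counter is 158, so the check passes and the first bdevperf instance (pid 79824) is shut down before the next pass starts.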
01:11:53.998 Latency(us) 01:11:53.998 [2024-12-09T06:10:48.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:53.998 [2024-12-09T06:10:48.585Z] =================================================================================================================== 01:11:53.998 [2024-12-09T06:10:48.585Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:11:53.998 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:11:53.998 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:11:53.998 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79824' 01:11:53.998 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79824 01:11:53.998 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79824 01:11:53.998 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 01:11:53.998 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 01:11:53.998 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 01:11:53.998 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 01:11:53.998 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 01:11:53.998 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79880 01:11:53.998 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79880 /var/tmp/bperf.sock 01:11:53.998 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79880 ']' 01:11:53.998 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:11:53.998 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 01:11:53.998 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 01:11:53.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:11:53.998 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:11:53.998 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 01:11:53.998 06:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:11:54.256 [2024-12-09 06:10:48.626179] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:11:54.256 [2024-12-09 06:10:48.626431] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 01:11:54.256 Zero copy mechanism will not be used. 
01:11:54.256 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79880 ] 01:11:54.256 [2024-12-09 06:10:48.780042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:54.256 [2024-12-09 06:10:48.835964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:11:54.514 [2024-12-09 06:10:48.906336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:11:55.136 06:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:11:55.136 06:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 01:11:55.136 06:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:11:55.136 06:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:11:55.136 06:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 01:11:55.136 06:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:55.136 06:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:11:55.426 06:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:55.426 06:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:11:55.426 06:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:11:55.426 nvme0n1 01:11:55.426 06:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 01:11:55.426 06:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:55.426 06:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:11:55.426 06:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:55.426 06:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 01:11:55.426 06:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:11:55.686 I/O size of 131072 is greater than zero copy threshold (65536). 01:11:55.686 Zero copy mechanism will not be used. 01:11:55.686 Running I/O for 2 seconds... 
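At this point the second pass is fully configured: a fresh bdevperf is listening on /var/tmp/bperf.sock, NVMe error statistics and unlimited bdev-layer retries are enabled, the controller is attached over TCP with data digest (--ddgst) turned on, and crc32c error injection is switched from disable to corrupt so that reads complete with data digest errors. A condensed sketch of that sequence follows, with every flag, address, and path copied from the trace above; the comments are interpretation added here, not part of the original output:

    BPERF_SOCK=/var/tmp/bperf.sock
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $BPERF_SOCK"

    # 1. Start bdevperf with its RPC server on the bperf socket: 128 KiB
    #    random reads, queue depth 16, 2 second runtime, -z = wait for the
    #    perform_tests RPC before starting I/O. (The test stores the pid and
    #    waits on the socket with waitforlisten.)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r "$BPERF_SOCK" -w randread -o 131072 -t 2 -q 16 -z &

    # 2. Count NVMe errors per status code and retry failed I/O at the bdev
    #    layer (-1) instead of failing the job.
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # 3. Attach the TCP controller with data digest enabled, keeping crc32c
    #    error injection disabled while connecting.
    $RPC accel_error_inject_error -o crc32c -t disable
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # 4. Corrupt crc32c results (flags as in the trace) and drive the
    #    workload through bdevperf's RPC helper.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s "$BPERF_SOCK" perform_tests

The digest errors logged below are therefore expected; as in the 4096-byte pass, they should surface as transient transport errors in bdev_get_iostat rather than as failed I/O.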
01:11:55.686 [2024-12-09 06:10:50.080011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.686 [2024-12-09 06:10:50.080242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.686 [2024-12-09 06:10:50.080354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:55.686 [2024-12-09 06:10:50.084579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.686 [2024-12-09 06:10:50.084759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.686 [2024-12-09 06:10:50.084853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:55.686 [2024-12-09 06:10:50.089058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.686 [2024-12-09 06:10:50.089246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.686 [2024-12-09 06:10:50.089343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:55.686 [2024-12-09 06:10:50.093467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.686 [2024-12-09 06:10:50.093616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.686 [2024-12-09 06:10:50.093713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:55.686 [2024-12-09 06:10:50.097887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.686 [2024-12-09 06:10:50.097924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.686 [2024-12-09 06:10:50.097935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:55.686 [2024-12-09 06:10:50.102033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.686 [2024-12-09 06:10:50.102069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.686 [2024-12-09 06:10:50.102080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:55.686 [2024-12-09 06:10:50.106233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.686 [2024-12-09 06:10:50.106268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.686 [2024-12-09 06:10:50.106280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:55.686 [2024-12-09 06:10:50.110381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.686 [2024-12-09 06:10:50.110416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.686 [2024-12-09 06:10:50.110427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:55.686 [2024-12-09 06:10:50.114535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.686 [2024-12-09 06:10:50.114568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.687 [2024-12-09 06:10:50.114578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:55.687 [2024-12-09 06:10:50.118632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.687 [2024-12-09 06:10:50.118665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.687 [2024-12-09 06:10:50.118675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:55.687 [2024-12-09 06:10:50.122773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.687 [2024-12-09 06:10:50.122913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.687 [2024-12-09 06:10:50.122926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:55.687 [2024-12-09 06:10:50.127075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.687 [2024-12-09 06:10:50.127123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.687 [2024-12-09 06:10:50.127146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:55.687 [2024-12-09 06:10:50.131219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.687 [2024-12-09 06:10:50.131251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.687 [2024-12-09 06:10:50.131262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:55.687 [2024-12-09 06:10:50.135377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.687 [2024-12-09 06:10:50.135409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.687 [2024-12-09 06:10:50.135420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:55.687 [2024-12-09 06:10:50.139544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.687 [2024-12-09 06:10:50.139577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.687 [2024-12-09 06:10:50.139587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:55.687 [2024-12-09 06:10:50.143661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.687 [2024-12-09 06:10:50.143693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.687 [2024-12-09 06:10:50.143703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:55.687 [2024-12-09 06:10:50.147832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.687 [2024-12-09 06:10:50.147864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.687 [2024-12-09 06:10:50.147875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:55.687 [2024-12-09 06:10:50.152049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.687 [2024-12-09 06:10:50.152080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.687 [2024-12-09 06:10:50.152104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:55.687 [2024-12-09 06:10:50.156155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.687 [2024-12-09 06:10:50.156186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.687 [2024-12-09 06:10:50.156196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:55.687 [2024-12-09 06:10:50.160262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.687 [2024-12-09 06:10:50.160294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.687 [2024-12-09 06:10:50.160304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:55.687 [2024-12-09 06:10:50.164361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.687 [2024-12-09 06:10:50.164393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.687 [2024-12-09 06:10:50.164404] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:55.687 [2024-12-09 06:10:50.168475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.687 [2024-12-09 06:10:50.168506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.687 [2024-12-09 06:10:50.168516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:55.687 [2024-12-09 06:10:50.172592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.687 [2024-12-09 06:10:50.172623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.687 [2024-12-09 06:10:50.172633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:55.687 [2024-12-09 06:10:50.176722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.687 [2024-12-09 06:10:50.176754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.687 [2024-12-09 06:10:50.176764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:55.687 [2024-12-09 06:10:50.180854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.687 [2024-12-09 06:10:50.180995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.687 [2024-12-09 06:10:50.181008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:55.687 [2024-12-09 06:10:50.185149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.687 [2024-12-09 06:10:50.185181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.687 [2024-12-09 06:10:50.185192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:55.687 [2024-12-09 06:10:50.189322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.687 [2024-12-09 06:10:50.189363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.687 [2024-12-09 06:10:50.189374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:55.687 [2024-12-09 06:10:50.193472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.687 [2024-12-09 06:10:50.193507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.687 
[2024-12-09 06:10:50.193518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:55.687 [2024-12-09 06:10:50.197698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.687 [2024-12-09 06:10:50.197731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.687 [2024-12-09 06:10:50.197742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:55.687 [2024-12-09 06:10:50.201901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.687 [2024-12-09 06:10:50.201935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.687 [2024-12-09 06:10:50.201945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:55.687 [2024-12-09 06:10:50.206080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.687 [2024-12-09 06:10:50.206126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.687 [2024-12-09 06:10:50.206138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:55.687 [2024-12-09 06:10:50.210380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.687 [2024-12-09 06:10:50.210413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.687 [2024-12-09 06:10:50.210424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:55.687 [2024-12-09 06:10:50.214601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.687 [2024-12-09 06:10:50.214636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.687 [2024-12-09 06:10:50.214647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:55.687 [2024-12-09 06:10:50.218885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.687 [2024-12-09 06:10:50.218919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.687 [2024-12-09 06:10:50.218930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:55.687 [2024-12-09 06:10:50.223049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.687 [2024-12-09 06:10:50.223080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6016 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.687 [2024-12-09 06:10:50.223112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:55.687 [2024-12-09 06:10:50.227217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.688 [2024-12-09 06:10:50.227248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.688 [2024-12-09 06:10:50.227259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:55.688 [2024-12-09 06:10:50.231348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.688 [2024-12-09 06:10:50.231380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.688 [2024-12-09 06:10:50.231391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:55.688 [2024-12-09 06:10:50.235495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.688 [2024-12-09 06:10:50.235527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.688 [2024-12-09 06:10:50.235538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:55.688 [2024-12-09 06:10:50.239634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.688 [2024-12-09 06:10:50.239666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.688 [2024-12-09 06:10:50.239676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:55.688 [2024-12-09 06:10:50.243787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.688 [2024-12-09 06:10:50.243819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.688 [2024-12-09 06:10:50.243829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:55.688 [2024-12-09 06:10:50.247915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.688 [2024-12-09 06:10:50.247948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.688 [2024-12-09 06:10:50.247958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:55.688 [2024-12-09 06:10:50.252098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.688 [2024-12-09 06:10:50.252127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:9 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.688 [2024-12-09 06:10:50.252138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:55.688 [2024-12-09 06:10:50.256251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.688 [2024-12-09 06:10:50.256282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.688 [2024-12-09 06:10:50.256293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:55.688 [2024-12-09 06:10:50.260385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.688 [2024-12-09 06:10:50.260416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.688 [2024-12-09 06:10:50.260427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:55.688 [2024-12-09 06:10:50.264564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.688 [2024-12-09 06:10:50.264596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.688 [2024-12-09 06:10:50.264607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:55.688 [2024-12-09 06:10:50.268747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.688 [2024-12-09 06:10:50.268780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.688 [2024-12-09 06:10:50.268791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:55.948 [2024-12-09 06:10:50.273062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.948 [2024-12-09 06:10:50.273104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.948 [2024-12-09 06:10:50.273117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:55.948 [2024-12-09 06:10:50.277297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.948 [2024-12-09 06:10:50.277330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.948 [2024-12-09 06:10:50.277340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:55.948 [2024-12-09 06:10:50.281522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.948 [2024-12-09 06:10:50.281556] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.948 [2024-12-09 06:10:50.281568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:55.948 [2024-12-09 06:10:50.285689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.948 [2024-12-09 06:10:50.285738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.948 [2024-12-09 06:10:50.285749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:55.948 [2024-12-09 06:10:50.289892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.948 [2024-12-09 06:10:50.289927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.948 [2024-12-09 06:10:50.289938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:55.948 [2024-12-09 06:10:50.294029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.949 [2024-12-09 06:10:50.294063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.949 [2024-12-09 06:10:50.294074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:55.949 [2024-12-09 06:10:50.298228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.949 [2024-12-09 06:10:50.298261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.949 [2024-12-09 06:10:50.298272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:55.949 [2024-12-09 06:10:50.302425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.949 [2024-12-09 06:10:50.302458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.949 [2024-12-09 06:10:50.302469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:55.949 [2024-12-09 06:10:50.306604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.949 [2024-12-09 06:10:50.306648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.949 [2024-12-09 06:10:50.306658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:55.949 [2024-12-09 06:10:50.310770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.949 
[2024-12-09 06:10:50.310802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.949 [2024-12-09 06:10:50.310813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:55.949 [2024-12-09 06:10:50.314926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.949 [2024-12-09 06:10:50.314959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.949 [2024-12-09 06:10:50.314970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:55.949 [2024-12-09 06:10:50.319136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.949 [2024-12-09 06:10:50.319167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.949 [2024-12-09 06:10:50.319178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:55.949 [2024-12-09 06:10:50.323286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.949 [2024-12-09 06:10:50.323318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.949 [2024-12-09 06:10:50.323328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:55.949 [2024-12-09 06:10:50.327409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.949 [2024-12-09 06:10:50.327440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.949 [2024-12-09 06:10:50.327451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:55.949 [2024-12-09 06:10:50.331458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.949 [2024-12-09 06:10:50.331490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.949 [2024-12-09 06:10:50.331500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:55.949 [2024-12-09 06:10:50.335622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.949 [2024-12-09 06:10:50.335654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.949 [2024-12-09 06:10:50.335664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:55.949 [2024-12-09 06:10:50.339835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x85c620) 01:11:55.949 [2024-12-09 06:10:50.339866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.949 [2024-12-09 06:10:50.339877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:55.949 [2024-12-09 06:10:50.344115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.949 [2024-12-09 06:10:50.344147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.949 [2024-12-09 06:10:50.344157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:55.949 [2024-12-09 06:10:50.348340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.949 [2024-12-09 06:10:50.348374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.949 [2024-12-09 06:10:50.348385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:55.949 [2024-12-09 06:10:50.352561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.949 [2024-12-09 06:10:50.352594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.949 [2024-12-09 06:10:50.352604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:55.949 [2024-12-09 06:10:50.356722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.949 [2024-12-09 06:10:50.356755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.949 [2024-12-09 06:10:50.356766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:55.949 [2024-12-09 06:10:50.360987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.949 [2024-12-09 06:10:50.361023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.949 [2024-12-09 06:10:50.361034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:55.949 [2024-12-09 06:10:50.365191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.949 [2024-12-09 06:10:50.365225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.949 [2024-12-09 06:10:50.365236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:55.949 [2024-12-09 06:10:50.369397] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.949 [2024-12-09 06:10:50.369432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.949 [2024-12-09 06:10:50.369443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:55.949 [2024-12-09 06:10:50.373560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.949 [2024-12-09 06:10:50.373593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.949 [2024-12-09 06:10:50.373604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:55.949 [2024-12-09 06:10:50.377779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.949 [2024-12-09 06:10:50.377813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.949 [2024-12-09 06:10:50.377824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:55.950 [2024-12-09 06:10:50.381940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.950 [2024-12-09 06:10:50.381973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.950 [2024-12-09 06:10:50.381984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:55.950 [2024-12-09 06:10:50.386138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.950 [2024-12-09 06:10:50.386169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.950 [2024-12-09 06:10:50.386179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:55.950 [2024-12-09 06:10:50.390342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.950 [2024-12-09 06:10:50.390374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.950 [2024-12-09 06:10:50.390385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:55.950 [2024-12-09 06:10:50.394461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.950 [2024-12-09 06:10:50.394493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.950 [2024-12-09 06:10:50.394503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
01:11:55.950 [2024-12-09 06:10:50.398574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.950 [2024-12-09 06:10:50.398607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.950 [2024-12-09 06:10:50.398617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:55.950 [2024-12-09 06:10:50.402924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.950 [2024-12-09 06:10:50.402960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.950 [2024-12-09 06:10:50.402970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:55.950 [2024-12-09 06:10:50.407106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.950 [2024-12-09 06:10:50.407138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.950 [2024-12-09 06:10:50.407148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:55.950 [2024-12-09 06:10:50.411271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.950 [2024-12-09 06:10:50.411302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.950 [2024-12-09 06:10:50.411313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:55.950 [2024-12-09 06:10:50.415391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.950 [2024-12-09 06:10:50.415423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.950 [2024-12-09 06:10:50.415433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:55.950 [2024-12-09 06:10:50.419486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.950 [2024-12-09 06:10:50.419517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.950 [2024-12-09 06:10:50.419528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:55.950 [2024-12-09 06:10:50.423571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.950 [2024-12-09 06:10:50.423602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.950 [2024-12-09 06:10:50.423612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:55.950 [2024-12-09 06:10:50.427673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.950 [2024-12-09 06:10:50.427705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.950 [2024-12-09 06:10:50.427715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:55.950 [2024-12-09 06:10:50.431845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.950 [2024-12-09 06:10:50.431877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.950 [2024-12-09 06:10:50.431887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:55.950 [2024-12-09 06:10:50.436038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.950 [2024-12-09 06:10:50.436070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.950 [2024-12-09 06:10:50.436080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:55.950 [2024-12-09 06:10:50.440164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.950 [2024-12-09 06:10:50.440194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.950 [2024-12-09 06:10:50.440204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:55.950 [2024-12-09 06:10:50.444296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.950 [2024-12-09 06:10:50.444436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.950 [2024-12-09 06:10:50.444450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:55.950 [2024-12-09 06:10:50.448603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.950 [2024-12-09 06:10:50.448636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.950 [2024-12-09 06:10:50.448647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:55.950 [2024-12-09 06:10:50.452708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.950 [2024-12-09 06:10:50.452741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.950 [2024-12-09 06:10:50.452751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:55.950 [2024-12-09 06:10:50.456855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.950 [2024-12-09 06:10:50.456888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.950 [2024-12-09 06:10:50.456899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:55.950 [2024-12-09 06:10:50.461025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.950 [2024-12-09 06:10:50.461057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.950 [2024-12-09 06:10:50.461068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:55.950 [2024-12-09 06:10:50.465115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.951 [2024-12-09 06:10:50.465144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.951 [2024-12-09 06:10:50.465155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:55.951 [2024-12-09 06:10:50.469217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.951 [2024-12-09 06:10:50.469248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.951 [2024-12-09 06:10:50.469258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:55.951 [2024-12-09 06:10:50.473411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.951 [2024-12-09 06:10:50.473443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.951 [2024-12-09 06:10:50.473454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:55.951 [2024-12-09 06:10:50.477563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.951 [2024-12-09 06:10:50.477595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.951 [2024-12-09 06:10:50.477606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:55.951 [2024-12-09 06:10:50.481782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.951 [2024-12-09 06:10:50.481815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.951 [2024-12-09 06:10:50.481826] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:55.951 [2024-12-09 06:10:50.485904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.951 [2024-12-09 06:10:50.485935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.951 [2024-12-09 06:10:50.485946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:55.951 [2024-12-09 06:10:50.490037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.951 [2024-12-09 06:10:50.490071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.951 [2024-12-09 06:10:50.490082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:55.951 [2024-12-09 06:10:50.494198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.951 [2024-12-09 06:10:50.494231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.951 [2024-12-09 06:10:50.494241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:55.951 [2024-12-09 06:10:50.498377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.951 [2024-12-09 06:10:50.498410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.951 [2024-12-09 06:10:50.498421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:55.951 [2024-12-09 06:10:50.502501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.951 [2024-12-09 06:10:50.502532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.951 [2024-12-09 06:10:50.502543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:55.951 [2024-12-09 06:10:50.506684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.951 [2024-12-09 06:10:50.506716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.951 [2024-12-09 06:10:50.506726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:55.951 [2024-12-09 06:10:50.510859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.951 [2024-12-09 06:10:50.510891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.951 
[2024-12-09 06:10:50.510902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:55.951 [2024-12-09 06:10:50.515041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.951 [2024-12-09 06:10:50.515194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.951 [2024-12-09 06:10:50.515207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:55.951 [2024-12-09 06:10:50.519327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.951 [2024-12-09 06:10:50.519359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.951 [2024-12-09 06:10:50.519370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:55.951 [2024-12-09 06:10:50.523456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.951 [2024-12-09 06:10:50.523488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.951 [2024-12-09 06:10:50.523499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:55.951 [2024-12-09 06:10:50.527636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:55.951 [2024-12-09 06:10:50.527668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:55.951 [2024-12-09 06:10:50.527679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.211 [2024-12-09 06:10:50.531901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.211 [2024-12-09 06:10:50.531933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.211 [2024-12-09 06:10:50.531944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.211 [2024-12-09 06:10:50.536132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.211 [2024-12-09 06:10:50.536162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.211 [2024-12-09 06:10:50.536173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.211 [2024-12-09 06:10:50.540220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.211 [2024-12-09 06:10:50.540252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23104 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.211 [2024-12-09 06:10:50.540263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.211 [2024-12-09 06:10:50.544383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.211 [2024-12-09 06:10:50.544415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.211 [2024-12-09 06:10:50.544425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.211 [2024-12-09 06:10:50.548508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.211 [2024-12-09 06:10:50.548540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.211 [2024-12-09 06:10:50.548551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.211 [2024-12-09 06:10:50.552667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.211 [2024-12-09 06:10:50.552701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.212 [2024-12-09 06:10:50.552711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.212 [2024-12-09 06:10:50.556874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.212 [2024-12-09 06:10:50.556907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.212 [2024-12-09 06:10:50.556918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.212 [2024-12-09 06:10:50.561007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.212 [2024-12-09 06:10:50.561038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.212 [2024-12-09 06:10:50.561049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.212 [2024-12-09 06:10:50.565149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.212 [2024-12-09 06:10:50.565179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.212 [2024-12-09 06:10:50.565189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.212 [2024-12-09 06:10:50.569299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.212 [2024-12-09 06:10:50.569330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.212 [2024-12-09 06:10:50.569340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.212 [2024-12-09 06:10:50.573496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.212 [2024-12-09 06:10:50.573527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.212 [2024-12-09 06:10:50.573538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.212 [2024-12-09 06:10:50.577570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.212 [2024-12-09 06:10:50.577602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.212 [2024-12-09 06:10:50.577612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.212 [2024-12-09 06:10:50.581795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.212 [2024-12-09 06:10:50.581826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.212 [2024-12-09 06:10:50.581837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.212 [2024-12-09 06:10:50.585922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.212 [2024-12-09 06:10:50.585953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.212 [2024-12-09 06:10:50.585964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.212 [2024-12-09 06:10:50.590019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.212 [2024-12-09 06:10:50.590199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.212 [2024-12-09 06:10:50.590213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.212 [2024-12-09 06:10:50.594345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.212 [2024-12-09 06:10:50.594379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.212 [2024-12-09 06:10:50.594390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.212 [2024-12-09 06:10:50.598445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.212 [2024-12-09 06:10:50.598477] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.212 [2024-12-09 06:10:50.598487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.212 [2024-12-09 06:10:50.602515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.212 [2024-12-09 06:10:50.602547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.212 [2024-12-09 06:10:50.602557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.212 [2024-12-09 06:10:50.606720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.212 [2024-12-09 06:10:50.606750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.212 [2024-12-09 06:10:50.606761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.212 [2024-12-09 06:10:50.610901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.212 [2024-12-09 06:10:50.610934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.212 [2024-12-09 06:10:50.610945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.212 [2024-12-09 06:10:50.615129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.212 [2024-12-09 06:10:50.615159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.212 [2024-12-09 06:10:50.615169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.212 [2024-12-09 06:10:50.619278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.212 [2024-12-09 06:10:50.619309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.212 [2024-12-09 06:10:50.619319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.212 [2024-12-09 06:10:50.623373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.212 [2024-12-09 06:10:50.623404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.212 [2024-12-09 06:10:50.623414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.212 [2024-12-09 06:10:50.627446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.212 
[2024-12-09 06:10:50.627478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.212 [2024-12-09 06:10:50.627488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.212 [2024-12-09 06:10:50.631546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.212 [2024-12-09 06:10:50.631577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.212 [2024-12-09 06:10:50.631588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.212 [2024-12-09 06:10:50.635677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.212 [2024-12-09 06:10:50.635708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.213 [2024-12-09 06:10:50.635719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.213 [2024-12-09 06:10:50.639828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.213 [2024-12-09 06:10:50.639860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.213 [2024-12-09 06:10:50.639870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.213 [2024-12-09 06:10:50.643973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.213 [2024-12-09 06:10:50.644004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.213 [2024-12-09 06:10:50.644015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.213 [2024-12-09 06:10:50.648142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.213 [2024-12-09 06:10:50.648172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.213 [2024-12-09 06:10:50.648183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.213 [2024-12-09 06:10:50.652256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.213 [2024-12-09 06:10:50.652288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.213 [2024-12-09 06:10:50.652298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.213 [2024-12-09 06:10:50.656500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x85c620) 01:11:56.213 [2024-12-09 06:10:50.656532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.213 [2024-12-09 06:10:50.656543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.213 [2024-12-09 06:10:50.660622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.213 [2024-12-09 06:10:50.660654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.213 [2024-12-09 06:10:50.660665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.213 [2024-12-09 06:10:50.664784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.213 [2024-12-09 06:10:50.664817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.213 [2024-12-09 06:10:50.664827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.213 [2024-12-09 06:10:50.668913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.213 [2024-12-09 06:10:50.668945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.213 [2024-12-09 06:10:50.668955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.213 [2024-12-09 06:10:50.673125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.213 [2024-12-09 06:10:50.673154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.213 [2024-12-09 06:10:50.673165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.213 [2024-12-09 06:10:50.677264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.213 [2024-12-09 06:10:50.677296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.213 [2024-12-09 06:10:50.677306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.213 [2024-12-09 06:10:50.681441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.213 [2024-12-09 06:10:50.681473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.213 [2024-12-09 06:10:50.681484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.213 [2024-12-09 06:10:50.685611] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.213 [2024-12-09 06:10:50.685642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.213 [2024-12-09 06:10:50.685653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.213 [2024-12-09 06:10:50.689819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.213 [2024-12-09 06:10:50.689850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.213 [2024-12-09 06:10:50.689860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.213 [2024-12-09 06:10:50.693969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.213 [2024-12-09 06:10:50.694001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.213 [2024-12-09 06:10:50.694012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.213 [2024-12-09 06:10:50.698108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.213 [2024-12-09 06:10:50.698137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.213 [2024-12-09 06:10:50.698147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.213 [2024-12-09 06:10:50.702183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.213 [2024-12-09 06:10:50.702215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.213 [2024-12-09 06:10:50.702225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.213 [2024-12-09 06:10:50.706284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.213 [2024-12-09 06:10:50.706316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.213 [2024-12-09 06:10:50.706327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.213 [2024-12-09 06:10:50.710406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.213 [2024-12-09 06:10:50.710437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.213 [2024-12-09 06:10:50.710447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
01:11:56.213 [2024-12-09 06:10:50.714495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.213 [2024-12-09 06:10:50.714526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.213 [2024-12-09 06:10:50.714536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.213 [2024-12-09 06:10:50.718583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.213 [2024-12-09 06:10:50.718615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.213 [2024-12-09 06:10:50.718625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.213 [2024-12-09 06:10:50.722624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.213 [2024-12-09 06:10:50.722656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.214 [2024-12-09 06:10:50.722666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.214 [2024-12-09 06:10:50.726710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.214 [2024-12-09 06:10:50.726742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.214 [2024-12-09 06:10:50.726752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.214 [2024-12-09 06:10:50.730961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.214 [2024-12-09 06:10:50.730992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.214 [2024-12-09 06:10:50.731003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.214 [2024-12-09 06:10:50.735191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.214 [2024-12-09 06:10:50.735222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.214 [2024-12-09 06:10:50.735232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.214 [2024-12-09 06:10:50.739334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.214 [2024-12-09 06:10:50.739366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.214 [2024-12-09 06:10:50.739376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.214 [2024-12-09 06:10:50.743459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.214 [2024-12-09 06:10:50.743491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.214 [2024-12-09 06:10:50.743502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.214 [2024-12-09 06:10:50.747560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.214 [2024-12-09 06:10:50.747592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.214 [2024-12-09 06:10:50.747603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.214 [2024-12-09 06:10:50.751733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.214 [2024-12-09 06:10:50.751765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.214 [2024-12-09 06:10:50.751775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.214 [2024-12-09 06:10:50.755892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.214 [2024-12-09 06:10:50.755924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.214 [2024-12-09 06:10:50.755934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.214 [2024-12-09 06:10:50.760008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.214 [2024-12-09 06:10:50.760043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.214 [2024-12-09 06:10:50.760053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.214 [2024-12-09 06:10:50.764113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.214 [2024-12-09 06:10:50.764143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.214 [2024-12-09 06:10:50.764153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.214 [2024-12-09 06:10:50.768183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.214 [2024-12-09 06:10:50.768214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.214 [2024-12-09 06:10:50.768225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.214 [2024-12-09 06:10:50.772251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.214 [2024-12-09 06:10:50.772284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.214 [2024-12-09 06:10:50.772294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.214 [2024-12-09 06:10:50.776350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.214 [2024-12-09 06:10:50.776382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.214 [2024-12-09 06:10:50.776393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.214 [2024-12-09 06:10:50.780434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.214 [2024-12-09 06:10:50.780466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.214 [2024-12-09 06:10:50.780477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.214 [2024-12-09 06:10:50.784510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.214 [2024-12-09 06:10:50.784541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.214 [2024-12-09 06:10:50.784552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.214 [2024-12-09 06:10:50.788548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.214 [2024-12-09 06:10:50.788580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.214 [2024-12-09 06:10:50.788590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.214 [2024-12-09 06:10:50.792603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.214 [2024-12-09 06:10:50.792636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.214 [2024-12-09 06:10:50.792647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.476 [2024-12-09 06:10:50.796763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.476 [2024-12-09 06:10:50.796796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.476 [2024-12-09 06:10:50.796807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.476 [2024-12-09 06:10:50.800910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.476 [2024-12-09 06:10:50.800942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.476 [2024-12-09 06:10:50.800952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.476 [2024-12-09 06:10:50.805017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.476 [2024-12-09 06:10:50.805049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.476 [2024-12-09 06:10:50.805060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.476 [2024-12-09 06:10:50.809142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.476 [2024-12-09 06:10:50.809172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.476 [2024-12-09 06:10:50.809182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.476 [2024-12-09 06:10:50.813193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.476 [2024-12-09 06:10:50.813224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.476 [2024-12-09 06:10:50.813234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.476 [2024-12-09 06:10:50.817362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.476 [2024-12-09 06:10:50.817410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.476 [2024-12-09 06:10:50.817421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.476 [2024-12-09 06:10:50.821522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.476 [2024-12-09 06:10:50.821554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.476 [2024-12-09 06:10:50.821565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.476 [2024-12-09 06:10:50.825650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.476 [2024-12-09 06:10:50.825683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.476 
[2024-12-09 06:10:50.825693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.476 [2024-12-09 06:10:50.829744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.476 [2024-12-09 06:10:50.829777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.476 [2024-12-09 06:10:50.829788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.476 [2024-12-09 06:10:50.833914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.476 [2024-12-09 06:10:50.833945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.476 [2024-12-09 06:10:50.833955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.477 [2024-12-09 06:10:50.838145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.477 [2024-12-09 06:10:50.838176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.477 [2024-12-09 06:10:50.838186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.477 [2024-12-09 06:10:50.842316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.477 [2024-12-09 06:10:50.842347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.477 [2024-12-09 06:10:50.842357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.477 [2024-12-09 06:10:50.846495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.477 [2024-12-09 06:10:50.846527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.477 [2024-12-09 06:10:50.846537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.477 [2024-12-09 06:10:50.850603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.477 [2024-12-09 06:10:50.850635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.477 [2024-12-09 06:10:50.850646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.477 [2024-12-09 06:10:50.854767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.477 [2024-12-09 06:10:50.854799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22880 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 01:11:56.477 [2024-12-09 06:10:50.854809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.477 [2024-12-09 06:10:50.858934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.477 [2024-12-09 06:10:50.859071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.477 [2024-12-09 06:10:50.859084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.477 [2024-12-09 06:10:50.863168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.477 [2024-12-09 06:10:50.863203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.477 [2024-12-09 06:10:50.863214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.477 [2024-12-09 06:10:50.867278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.477 [2024-12-09 06:10:50.867311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.477 [2024-12-09 06:10:50.867322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.477 [2024-12-09 06:10:50.871463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.477 [2024-12-09 06:10:50.871495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.477 [2024-12-09 06:10:50.871506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.477 [2024-12-09 06:10:50.875587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.477 [2024-12-09 06:10:50.875619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.477 [2024-12-09 06:10:50.875629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.477 [2024-12-09 06:10:50.879702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.477 [2024-12-09 06:10:50.879734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.477 [2024-12-09 06:10:50.879745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.477 [2024-12-09 06:10:50.883873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.477 [2024-12-09 06:10:50.883904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.477 [2024-12-09 06:10:50.883915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.477 [2024-12-09 06:10:50.887972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.477 [2024-12-09 06:10:50.888004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.477 [2024-12-09 06:10:50.888014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.477 [2024-12-09 06:10:50.892113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.477 [2024-12-09 06:10:50.892143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.477 [2024-12-09 06:10:50.892153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.477 [2024-12-09 06:10:50.896210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.477 [2024-12-09 06:10:50.896241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.477 [2024-12-09 06:10:50.896251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.477 [2024-12-09 06:10:50.900342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.477 [2024-12-09 06:10:50.900374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.477 [2024-12-09 06:10:50.900384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.477 [2024-12-09 06:10:50.904475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.477 [2024-12-09 06:10:50.904508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.477 [2024-12-09 06:10:50.904518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.477 [2024-12-09 06:10:50.908583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.477 [2024-12-09 06:10:50.908614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.477 [2024-12-09 06:10:50.908624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.477 [2024-12-09 06:10:50.912730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.477 [2024-12-09 06:10:50.912762] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.477 [2024-12-09 06:10:50.912773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.477 [2024-12-09 06:10:50.916879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.477 [2024-12-09 06:10:50.916911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.477 [2024-12-09 06:10:50.916921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.477 [2024-12-09 06:10:50.920983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.477 [2024-12-09 06:10:50.921015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.478 [2024-12-09 06:10:50.921025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.478 [2024-12-09 06:10:50.925073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.478 [2024-12-09 06:10:50.925117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.478 [2024-12-09 06:10:50.925127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.478 [2024-12-09 06:10:50.929147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.478 [2024-12-09 06:10:50.929179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.478 [2024-12-09 06:10:50.929189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.478 [2024-12-09 06:10:50.933251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.478 [2024-12-09 06:10:50.933283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.478 [2024-12-09 06:10:50.933293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.478 [2024-12-09 06:10:50.937418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.478 [2024-12-09 06:10:50.937452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.478 [2024-12-09 06:10:50.937464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.478 [2024-12-09 06:10:50.941604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 
01:11:56.478 [2024-12-09 06:10:50.941637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.478 [2024-12-09 06:10:50.941647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.478 [2024-12-09 06:10:50.945797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.478 [2024-12-09 06:10:50.945830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.478 [2024-12-09 06:10:50.945841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.478 [2024-12-09 06:10:50.950021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.478 [2024-12-09 06:10:50.950054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.478 [2024-12-09 06:10:50.950065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.478 [2024-12-09 06:10:50.954227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.478 [2024-12-09 06:10:50.954259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.478 [2024-12-09 06:10:50.954270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.478 [2024-12-09 06:10:50.958354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.478 [2024-12-09 06:10:50.958387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.478 [2024-12-09 06:10:50.958398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.478 [2024-12-09 06:10:50.962466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.478 [2024-12-09 06:10:50.962498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.478 [2024-12-09 06:10:50.962510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.478 [2024-12-09 06:10:50.966626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.478 [2024-12-09 06:10:50.966659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.478 [2024-12-09 06:10:50.966670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.478 [2024-12-09 06:10:50.970773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x85c620) 01:11:56.478 [2024-12-09 06:10:50.970805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.478 [2024-12-09 06:10:50.970815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.478 [2024-12-09 06:10:50.974975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.478 [2024-12-09 06:10:50.975006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.478 [2024-12-09 06:10:50.975016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.478 [2024-12-09 06:10:50.979130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.478 [2024-12-09 06:10:50.979160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.478 [2024-12-09 06:10:50.979170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.478 [2024-12-09 06:10:50.983216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.478 [2024-12-09 06:10:50.983247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.478 [2024-12-09 06:10:50.983257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.478 [2024-12-09 06:10:50.987384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.478 [2024-12-09 06:10:50.987415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.478 [2024-12-09 06:10:50.987426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.478 [2024-12-09 06:10:50.991508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.478 [2024-12-09 06:10:50.991539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.478 [2024-12-09 06:10:50.991550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.478 [2024-12-09 06:10:50.995616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.478 [2024-12-09 06:10:50.995647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.478 [2024-12-09 06:10:50.995658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.478 [2024-12-09 06:10:50.999729] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.478 [2024-12-09 06:10:50.999761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.478 [2024-12-09 06:10:50.999771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.478 [2024-12-09 06:10:51.003820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.478 [2024-12-09 06:10:51.003852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.478 [2024-12-09 06:10:51.003862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.479 [2024-12-09 06:10:51.007984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.479 [2024-12-09 06:10:51.008016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.479 [2024-12-09 06:10:51.008027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.479 [2024-12-09 06:10:51.012121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.479 [2024-12-09 06:10:51.012151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.479 [2024-12-09 06:10:51.012162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.479 [2024-12-09 06:10:51.016242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.479 [2024-12-09 06:10:51.016275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.479 [2024-12-09 06:10:51.016285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.479 [2024-12-09 06:10:51.020331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.479 [2024-12-09 06:10:51.020363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.479 [2024-12-09 06:10:51.020374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.479 [2024-12-09 06:10:51.024436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.479 [2024-12-09 06:10:51.024469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.479 [2024-12-09 06:10:51.024479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
01:11:56.479 [2024-12-09 06:10:51.028583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.479 [2024-12-09 06:10:51.028614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.479 [2024-12-09 06:10:51.028625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.479 [2024-12-09 06:10:51.032649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.479 [2024-12-09 06:10:51.032680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.479 [2024-12-09 06:10:51.032691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.479 [2024-12-09 06:10:51.036751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.479 [2024-12-09 06:10:51.036783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.479 [2024-12-09 06:10:51.036793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.479 [2024-12-09 06:10:51.040848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.479 [2024-12-09 06:10:51.040880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.479 [2024-12-09 06:10:51.040890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.479 [2024-12-09 06:10:51.044922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.479 [2024-12-09 06:10:51.044955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.479 [2024-12-09 06:10:51.044965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.479 [2024-12-09 06:10:51.049024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.479 [2024-12-09 06:10:51.049055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.479 [2024-12-09 06:10:51.049066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.479 [2024-12-09 06:10:51.053138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.479 [2024-12-09 06:10:51.053168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.479 [2024-12-09 06:10:51.053178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.479 [2024-12-09 06:10:51.057334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.479 [2024-12-09 06:10:51.057483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.479 [2024-12-09 06:10:51.057498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.741 [2024-12-09 06:10:51.061684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.741 [2024-12-09 06:10:51.061721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.741 [2024-12-09 06:10:51.061732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.741 [2024-12-09 06:10:51.065862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.741 [2024-12-09 06:10:51.065897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.741 [2024-12-09 06:10:51.065907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.741 [2024-12-09 06:10:51.070020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.741 [2024-12-09 06:10:51.070054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.741 [2024-12-09 06:10:51.070065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.741 7393.00 IOPS, 924.12 MiB/s [2024-12-09T06:10:51.328Z] [2024-12-09 06:10:51.075507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.741 [2024-12-09 06:10:51.075542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.741 [2024-12-09 06:10:51.075553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.741 [2024-12-09 06:10:51.079651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.741 [2024-12-09 06:10:51.079798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.741 [2024-12-09 06:10:51.079812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.741 [2024-12-09 06:10:51.083917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.741 [2024-12-09 06:10:51.083951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.741 [2024-12-09 06:10:51.083962] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.741 [2024-12-09 06:10:51.088107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.741 [2024-12-09 06:10:51.088139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.741 [2024-12-09 06:10:51.088149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.741 [2024-12-09 06:10:51.092211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.741 [2024-12-09 06:10:51.092243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.741 [2024-12-09 06:10:51.092253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.741 [2024-12-09 06:10:51.096390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.741 [2024-12-09 06:10:51.096422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.741 [2024-12-09 06:10:51.096433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.741 [2024-12-09 06:10:51.100468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.741 [2024-12-09 06:10:51.100501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.741 [2024-12-09 06:10:51.100511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.741 [2024-12-09 06:10:51.104526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.741 [2024-12-09 06:10:51.104558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.741 [2024-12-09 06:10:51.104568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.741 [2024-12-09 06:10:51.108576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.741 [2024-12-09 06:10:51.108609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.741 [2024-12-09 06:10:51.108619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.741 [2024-12-09 06:10:51.112669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.741 [2024-12-09 06:10:51.112701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:11:56.741 [2024-12-09 06:10:51.112711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.741 [2024-12-09 06:10:51.116827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.741 [2024-12-09 06:10:51.116859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.741 [2024-12-09 06:10:51.116870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.741 [2024-12-09 06:10:51.120925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.741 [2024-12-09 06:10:51.120957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.741 [2024-12-09 06:10:51.120968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.741 [2024-12-09 06:10:51.124996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.741 [2024-12-09 06:10:51.125028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.741 [2024-12-09 06:10:51.125038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.741 [2024-12-09 06:10:51.129071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.742 [2024-12-09 06:10:51.129111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.742 [2024-12-09 06:10:51.129121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.742 [2024-12-09 06:10:51.133143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.742 [2024-12-09 06:10:51.133172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.742 [2024-12-09 06:10:51.133183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.742 [2024-12-09 06:10:51.137301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.742 [2024-12-09 06:10:51.137333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.742 [2024-12-09 06:10:51.137350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.742 [2024-12-09 06:10:51.141524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.742 [2024-12-09 06:10:51.141574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.742 [2024-12-09 06:10:51.141585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.742 [2024-12-09 06:10:51.145684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.742 [2024-12-09 06:10:51.145716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.742 [2024-12-09 06:10:51.145726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.742 [2024-12-09 06:10:51.149806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.742 [2024-12-09 06:10:51.149838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.742 [2024-12-09 06:10:51.149849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.742 [2024-12-09 06:10:51.153924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.742 [2024-12-09 06:10:51.154076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.742 [2024-12-09 06:10:51.154105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.742 [2024-12-09 06:10:51.158289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.742 [2024-12-09 06:10:51.158324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.742 [2024-12-09 06:10:51.158335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.742 [2024-12-09 06:10:51.162422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.742 [2024-12-09 06:10:51.162454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.742 [2024-12-09 06:10:51.162465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.742 [2024-12-09 06:10:51.166517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.742 [2024-12-09 06:10:51.166551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.742 [2024-12-09 06:10:51.166561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.742 [2024-12-09 06:10:51.170633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.742 [2024-12-09 06:10:51.170666] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.742 [2024-12-09 06:10:51.170676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.742 [2024-12-09 06:10:51.174853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.742 [2024-12-09 06:10:51.174886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.742 [2024-12-09 06:10:51.174896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.742 [2024-12-09 06:10:51.179032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.742 [2024-12-09 06:10:51.179065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.742 [2024-12-09 06:10:51.179075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.742 [2024-12-09 06:10:51.183166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.742 [2024-12-09 06:10:51.183196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.742 [2024-12-09 06:10:51.183207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.742 [2024-12-09 06:10:51.187229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.742 [2024-12-09 06:10:51.187259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.742 [2024-12-09 06:10:51.187270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.742 [2024-12-09 06:10:51.191300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.742 [2024-12-09 06:10:51.191332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.742 [2024-12-09 06:10:51.191342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.742 [2024-12-09 06:10:51.195397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.742 [2024-12-09 06:10:51.195428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.742 [2024-12-09 06:10:51.195439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.742 [2024-12-09 06:10:51.199521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.742 [2024-12-09 06:10:51.199553] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.742 [2024-12-09 06:10:51.199563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.742 [2024-12-09 06:10:51.203591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.742 [2024-12-09 06:10:51.203624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.742 [2024-12-09 06:10:51.203634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.742 [2024-12-09 06:10:51.207718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.742 [2024-12-09 06:10:51.207750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.742 [2024-12-09 06:10:51.207761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.742 [2024-12-09 06:10:51.211805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.742 [2024-12-09 06:10:51.211837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.743 [2024-12-09 06:10:51.211848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.743 [2024-12-09 06:10:51.215908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.743 [2024-12-09 06:10:51.215940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.743 [2024-12-09 06:10:51.215951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.743 [2024-12-09 06:10:51.220047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.743 [2024-12-09 06:10:51.220079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.743 [2024-12-09 06:10:51.220101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.743 [2024-12-09 06:10:51.224151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.743 [2024-12-09 06:10:51.224182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.743 [2024-12-09 06:10:51.224192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.743 [2024-12-09 06:10:51.228270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 
01:11:56.743 [2024-12-09 06:10:51.228301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.743 [2024-12-09 06:10:51.228312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.743 [2024-12-09 06:10:51.232390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.743 [2024-12-09 06:10:51.232423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.743 [2024-12-09 06:10:51.232434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.743 [2024-12-09 06:10:51.236493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.743 [2024-12-09 06:10:51.236526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.743 [2024-12-09 06:10:51.236536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.743 [2024-12-09 06:10:51.240535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.743 [2024-12-09 06:10:51.240566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.743 [2024-12-09 06:10:51.240576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.743 [2024-12-09 06:10:51.244573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.743 [2024-12-09 06:10:51.244606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.743 [2024-12-09 06:10:51.244616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.743 [2024-12-09 06:10:51.248617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.743 [2024-12-09 06:10:51.248651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.743 [2024-12-09 06:10:51.248661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.743 [2024-12-09 06:10:51.252735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.743 [2024-12-09 06:10:51.252767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.743 [2024-12-09 06:10:51.252778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.743 [2024-12-09 06:10:51.256910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x85c620) 01:11:56.743 [2024-12-09 06:10:51.256941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.743 [2024-12-09 06:10:51.256952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.743 [2024-12-09 06:10:51.261051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.743 [2024-12-09 06:10:51.261083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.743 [2024-12-09 06:10:51.261104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.743 [2024-12-09 06:10:51.265153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.743 [2024-12-09 06:10:51.265183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.743 [2024-12-09 06:10:51.265194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.743 [2024-12-09 06:10:51.269221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.743 [2024-12-09 06:10:51.269253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.743 [2024-12-09 06:10:51.269263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.743 [2024-12-09 06:10:51.273389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.743 [2024-12-09 06:10:51.273422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.743 [2024-12-09 06:10:51.273432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.743 [2024-12-09 06:10:51.277538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.743 [2024-12-09 06:10:51.277569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.743 [2024-12-09 06:10:51.277580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.743 [2024-12-09 06:10:51.281657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.743 [2024-12-09 06:10:51.281688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.743 [2024-12-09 06:10:51.281699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.743 [2024-12-09 06:10:51.285774] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.743 [2024-12-09 06:10:51.285805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.743 [2024-12-09 06:10:51.285816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.743 [2024-12-09 06:10:51.289925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.743 [2024-12-09 06:10:51.289956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.743 [2024-12-09 06:10:51.289967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.743 [2024-12-09 06:10:51.294063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.743 [2024-12-09 06:10:51.294107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.743 [2024-12-09 06:10:51.294118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:56.743 [2024-12-09 06:10:51.298154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.744 [2024-12-09 06:10:51.298184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.744 [2024-12-09 06:10:51.298195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.744 [2024-12-09 06:10:51.302239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.744 [2024-12-09 06:10:51.302270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.744 [2024-12-09 06:10:51.302280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.744 [2024-12-09 06:10:51.306378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.744 [2024-12-09 06:10:51.306410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.744 [2024-12-09 06:10:51.306420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:56.744 [2024-12-09 06:10:51.310573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.744 [2024-12-09 06:10:51.310604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.744 [2024-12-09 06:10:51.310614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
01:11:56.744 [2024-12-09 06:10:51.314703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.744 [2024-12-09 06:10:51.314733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.744 [2024-12-09 06:10:51.314743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:56.744 [2024-12-09 06:10:51.318896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.744 [2024-12-09 06:10:51.318928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.744 [2024-12-09 06:10:51.318938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:56.744 [2024-12-09 06:10:51.323079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:56.744 [2024-12-09 06:10:51.323119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:56.744 [2024-12-09 06:10:51.323131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.005 [2024-12-09 06:10:51.327305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.005 [2024-12-09 06:10:51.327337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.005 [2024-12-09 06:10:51.327347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.005 [2024-12-09 06:10:51.331520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.005 [2024-12-09 06:10:51.331552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.005 [2024-12-09 06:10:51.331563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.005 [2024-12-09 06:10:51.335718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.005 [2024-12-09 06:10:51.335752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.005 [2024-12-09 06:10:51.335763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.005 [2024-12-09 06:10:51.339977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.005 [2024-12-09 06:10:51.340009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.005 [2024-12-09 06:10:51.340020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.005 [2024-12-09 06:10:51.344208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.005 [2024-12-09 06:10:51.344241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.005 [2024-12-09 06:10:51.344252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.005 [2024-12-09 06:10:51.348427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.005 [2024-12-09 06:10:51.348460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.005 [2024-12-09 06:10:51.348471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.005 [2024-12-09 06:10:51.352677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.005 [2024-12-09 06:10:51.352709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.005 [2024-12-09 06:10:51.352720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.005 [2024-12-09 06:10:51.356883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.005 [2024-12-09 06:10:51.356914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.005 [2024-12-09 06:10:51.356924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.005 [2024-12-09 06:10:51.361107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.005 [2024-12-09 06:10:51.361138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.005 [2024-12-09 06:10:51.361149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.005 [2024-12-09 06:10:51.365261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.005 [2024-12-09 06:10:51.365294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.005 [2024-12-09 06:10:51.365305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.005 [2024-12-09 06:10:51.369442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.005 [2024-12-09 06:10:51.369471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.005 [2024-12-09 06:10:51.369482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.005 [2024-12-09 06:10:51.373638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.005 [2024-12-09 06:10:51.373668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.005 [2024-12-09 06:10:51.373679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.005 [2024-12-09 06:10:51.377860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.005 [2024-12-09 06:10:51.377890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.005 [2024-12-09 06:10:51.377902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.005 [2024-12-09 06:10:51.382047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.005 [2024-12-09 06:10:51.382076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.006 [2024-12-09 06:10:51.382099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.006 [2024-12-09 06:10:51.386225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.006 [2024-12-09 06:10:51.386255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.006 [2024-12-09 06:10:51.386265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.006 [2024-12-09 06:10:51.390504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.006 [2024-12-09 06:10:51.390533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.006 [2024-12-09 06:10:51.390543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.006 [2024-12-09 06:10:51.394684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.006 [2024-12-09 06:10:51.394712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.006 [2024-12-09 06:10:51.394723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.006 [2024-12-09 06:10:51.398840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.006 [2024-12-09 06:10:51.398867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.006 [2024-12-09 06:10:51.398878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.006 [2024-12-09 06:10:51.403014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.006 [2024-12-09 06:10:51.403043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.006 [2024-12-09 06:10:51.403054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.006 [2024-12-09 06:10:51.407179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.006 [2024-12-09 06:10:51.407202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.006 [2024-12-09 06:10:51.407213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.006 [2024-12-09 06:10:51.411371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.006 [2024-12-09 06:10:51.411398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.006 [2024-12-09 06:10:51.411409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.006 [2024-12-09 06:10:51.415540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.006 [2024-12-09 06:10:51.415568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.006 [2024-12-09 06:10:51.415578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.006 [2024-12-09 06:10:51.419733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.006 [2024-12-09 06:10:51.419762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.006 [2024-12-09 06:10:51.419772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.006 [2024-12-09 06:10:51.423904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.006 [2024-12-09 06:10:51.423933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.006 [2024-12-09 06:10:51.423943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.006 [2024-12-09 06:10:51.428031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.006 [2024-12-09 06:10:51.428059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.006 
[2024-12-09 06:10:51.428069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.006 [2024-12-09 06:10:51.432248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.006 [2024-12-09 06:10:51.432278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.006 [2024-12-09 06:10:51.432289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.006 [2024-12-09 06:10:51.436416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.006 [2024-12-09 06:10:51.436445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.006 [2024-12-09 06:10:51.436455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.006 [2024-12-09 06:10:51.440612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.006 [2024-12-09 06:10:51.440639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.006 [2024-12-09 06:10:51.440648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.006 [2024-12-09 06:10:51.444738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.006 [2024-12-09 06:10:51.444767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.006 [2024-12-09 06:10:51.444776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.006 [2024-12-09 06:10:51.448907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.006 [2024-12-09 06:10:51.448935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.006 [2024-12-09 06:10:51.448945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.006 [2024-12-09 06:10:51.453083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.006 [2024-12-09 06:10:51.453120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.006 [2024-12-09 06:10:51.453130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.006 [2024-12-09 06:10:51.457255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.006 [2024-12-09 06:10:51.457283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9760 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 01:11:57.006 [2024-12-09 06:10:51.457293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.006 [2024-12-09 06:10:51.461423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.006 [2024-12-09 06:10:51.461452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.006 [2024-12-09 06:10:51.461463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.006 [2024-12-09 06:10:51.465564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.006 [2024-12-09 06:10:51.465593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.006 [2024-12-09 06:10:51.465603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.006 [2024-12-09 06:10:51.469782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.007 [2024-12-09 06:10:51.469812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.007 [2024-12-09 06:10:51.469823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.007 [2024-12-09 06:10:51.473939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.007 [2024-12-09 06:10:51.473969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.007 [2024-12-09 06:10:51.473979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.007 [2024-12-09 06:10:51.478092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.007 [2024-12-09 06:10:51.478134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.007 [2024-12-09 06:10:51.478145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.007 [2024-12-09 06:10:51.482383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.007 [2024-12-09 06:10:51.482411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.007 [2024-12-09 06:10:51.482422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.007 [2024-12-09 06:10:51.486662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.007 [2024-12-09 06:10:51.486690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:2 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.007 [2024-12-09 06:10:51.486700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.007 [2024-12-09 06:10:51.490870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.007 [2024-12-09 06:10:51.490898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.007 [2024-12-09 06:10:51.490909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.007 [2024-12-09 06:10:51.495089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.007 [2024-12-09 06:10:51.495126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.007 [2024-12-09 06:10:51.495136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.007 [2024-12-09 06:10:51.499207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.007 [2024-12-09 06:10:51.499234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.007 [2024-12-09 06:10:51.499244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.007 [2024-12-09 06:10:51.503419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.007 [2024-12-09 06:10:51.503447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.007 [2024-12-09 06:10:51.503458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.007 [2024-12-09 06:10:51.507539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.007 [2024-12-09 06:10:51.507566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.007 [2024-12-09 06:10:51.507577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.007 [2024-12-09 06:10:51.511755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.007 [2024-12-09 06:10:51.511782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.007 [2024-12-09 06:10:51.511792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.007 [2024-12-09 06:10:51.515886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.007 [2024-12-09 06:10:51.515914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.007 [2024-12-09 06:10:51.515924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.007 [2024-12-09 06:10:51.519964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.007 [2024-12-09 06:10:51.519991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.007 [2024-12-09 06:10:51.520002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.007 [2024-12-09 06:10:51.524002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.007 [2024-12-09 06:10:51.524030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.007 [2024-12-09 06:10:51.524040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.007 [2024-12-09 06:10:51.528168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.007 [2024-12-09 06:10:51.528195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.007 [2024-12-09 06:10:51.528205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.007 [2024-12-09 06:10:51.532257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.007 [2024-12-09 06:10:51.532284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.007 [2024-12-09 06:10:51.532294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.007 [2024-12-09 06:10:51.536323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.007 [2024-12-09 06:10:51.536350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.007 [2024-12-09 06:10:51.536360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.007 [2024-12-09 06:10:51.540404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.007 [2024-12-09 06:10:51.540431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.007 [2024-12-09 06:10:51.540442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.007 [2024-12-09 06:10:51.544523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 
01:11:57.007 [2024-12-09 06:10:51.544551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.007 [2024-12-09 06:10:51.544561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.007 [2024-12-09 06:10:51.548574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.007 [2024-12-09 06:10:51.548601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.007 [2024-12-09 06:10:51.548611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.007 [2024-12-09 06:10:51.552689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.007 [2024-12-09 06:10:51.552717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.008 [2024-12-09 06:10:51.552728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.008 [2024-12-09 06:10:51.556807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.008 [2024-12-09 06:10:51.556835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.008 [2024-12-09 06:10:51.556845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.008 [2024-12-09 06:10:51.560958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.008 [2024-12-09 06:10:51.560986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.008 [2024-12-09 06:10:51.560996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.008 [2024-12-09 06:10:51.565053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.008 [2024-12-09 06:10:51.565081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.008 [2024-12-09 06:10:51.565109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.008 [2024-12-09 06:10:51.569160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.008 [2024-12-09 06:10:51.569187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.008 [2024-12-09 06:10:51.569197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.008 [2024-12-09 06:10:51.573303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x85c620) 01:11:57.008 [2024-12-09 06:10:51.573330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.008 [2024-12-09 06:10:51.573340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.008 [2024-12-09 06:10:51.577410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.008 [2024-12-09 06:10:51.577437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.008 [2024-12-09 06:10:51.577447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.008 [2024-12-09 06:10:51.581517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.008 [2024-12-09 06:10:51.581545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.008 [2024-12-09 06:10:51.581555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.008 [2024-12-09 06:10:51.585706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.008 [2024-12-09 06:10:51.585735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.008 [2024-12-09 06:10:51.585745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.269 [2024-12-09 06:10:51.589857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.269 [2024-12-09 06:10:51.589886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.269 [2024-12-09 06:10:51.589897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.269 [2024-12-09 06:10:51.594085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.269 [2024-12-09 06:10:51.594122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.269 [2024-12-09 06:10:51.594133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.269 [2024-12-09 06:10:51.598307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.269 [2024-12-09 06:10:51.598336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.269 [2024-12-09 06:10:51.598347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.269 [2024-12-09 06:10:51.602507] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.269 [2024-12-09 06:10:51.602536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.269 [2024-12-09 06:10:51.602546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.269 [2024-12-09 06:10:51.606700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.269 [2024-12-09 06:10:51.606728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.269 [2024-12-09 06:10:51.606738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.269 [2024-12-09 06:10:51.610929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.269 [2024-12-09 06:10:51.610957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.269 [2024-12-09 06:10:51.610967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.269 [2024-12-09 06:10:51.615169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.269 [2024-12-09 06:10:51.615197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.269 [2024-12-09 06:10:51.615207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.269 [2024-12-09 06:10:51.619298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.269 [2024-12-09 06:10:51.619326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.269 [2024-12-09 06:10:51.619336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.269 [2024-12-09 06:10:51.623433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.269 [2024-12-09 06:10:51.623462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.269 [2024-12-09 06:10:51.623472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.269 [2024-12-09 06:10:51.627560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.269 [2024-12-09 06:10:51.627588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.269 [2024-12-09 06:10:51.627598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
01:11:57.269 [2024-12-09 06:10:51.631781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.269 [2024-12-09 06:10:51.631809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.270 [2024-12-09 06:10:51.631819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.270 [2024-12-09 06:10:51.635959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.270 [2024-12-09 06:10:51.635988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.270 [2024-12-09 06:10:51.635998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.270 [2024-12-09 06:10:51.640060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.270 [2024-12-09 06:10:51.640097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.270 [2024-12-09 06:10:51.640107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.270 [2024-12-09 06:10:51.644196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.270 [2024-12-09 06:10:51.644226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.270 [2024-12-09 06:10:51.644236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.270 [2024-12-09 06:10:51.648347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.270 [2024-12-09 06:10:51.648376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.270 [2024-12-09 06:10:51.648387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.270 [2024-12-09 06:10:51.652503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.270 [2024-12-09 06:10:51.652532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.270 [2024-12-09 06:10:51.652542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.270 [2024-12-09 06:10:51.656600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.270 [2024-12-09 06:10:51.656627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.270 [2024-12-09 06:10:51.656638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.270 [2024-12-09 06:10:51.660733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.270 [2024-12-09 06:10:51.660761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.270 [2024-12-09 06:10:51.660771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.270 [2024-12-09 06:10:51.664890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.270 [2024-12-09 06:10:51.664918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.270 [2024-12-09 06:10:51.664928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.270 [2024-12-09 06:10:51.668984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.270 [2024-12-09 06:10:51.669011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.270 [2024-12-09 06:10:51.669021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.270 [2024-12-09 06:10:51.673156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.270 [2024-12-09 06:10:51.673183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.270 [2024-12-09 06:10:51.673193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.270 [2024-12-09 06:10:51.677246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.270 [2024-12-09 06:10:51.677273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.270 [2024-12-09 06:10:51.677283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.270 [2024-12-09 06:10:51.681296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.270 [2024-12-09 06:10:51.681324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.270 [2024-12-09 06:10:51.681334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.270 [2024-12-09 06:10:51.685490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.270 [2024-12-09 06:10:51.685519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.270 [2024-12-09 06:10:51.685530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.270 [2024-12-09 06:10:51.689634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.270 [2024-12-09 06:10:51.689662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.270 [2024-12-09 06:10:51.689672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.270 [2024-12-09 06:10:51.693736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.270 [2024-12-09 06:10:51.693781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.270 [2024-12-09 06:10:51.693792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.270 [2024-12-09 06:10:51.697916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.270 [2024-12-09 06:10:51.697949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.270 [2024-12-09 06:10:51.697959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.270 [2024-12-09 06:10:51.702059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.270 [2024-12-09 06:10:51.702099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.270 [2024-12-09 06:10:51.702110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.270 [2024-12-09 06:10:51.706184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.270 [2024-12-09 06:10:51.706212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.270 [2024-12-09 06:10:51.706222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.270 [2024-12-09 06:10:51.710260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.270 [2024-12-09 06:10:51.710288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.270 [2024-12-09 06:10:51.710298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.270 [2024-12-09 06:10:51.714364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.270 [2024-12-09 06:10:51.714392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.270 [2024-12-09 06:10:51.714402] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.270 [2024-12-09 06:10:51.718381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.270 [2024-12-09 06:10:51.718409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.271 [2024-12-09 06:10:51.718418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.271 [2024-12-09 06:10:51.722538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.271 [2024-12-09 06:10:51.722565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.271 [2024-12-09 06:10:51.722576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.271 [2024-12-09 06:10:51.726647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.271 [2024-12-09 06:10:51.726676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.271 [2024-12-09 06:10:51.726685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.271 [2024-12-09 06:10:51.730896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.271 [2024-12-09 06:10:51.730923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.271 [2024-12-09 06:10:51.730933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.271 [2024-12-09 06:10:51.735069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.271 [2024-12-09 06:10:51.735106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.271 [2024-12-09 06:10:51.735117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.271 [2024-12-09 06:10:51.739173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.271 [2024-12-09 06:10:51.739199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.271 [2024-12-09 06:10:51.739209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.271 [2024-12-09 06:10:51.743323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.271 [2024-12-09 06:10:51.743349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:11:57.271 [2024-12-09 06:10:51.743359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.271 [2024-12-09 06:10:51.747460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.271 [2024-12-09 06:10:51.747486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.271 [2024-12-09 06:10:51.747496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.271 [2024-12-09 06:10:51.751565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.271 [2024-12-09 06:10:51.751593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.271 [2024-12-09 06:10:51.751603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.271 [2024-12-09 06:10:51.755766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.271 [2024-12-09 06:10:51.755793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.271 [2024-12-09 06:10:51.755803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.271 [2024-12-09 06:10:51.759865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.271 [2024-12-09 06:10:51.759893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.271 [2024-12-09 06:10:51.759903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.271 [2024-12-09 06:10:51.764003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.271 [2024-12-09 06:10:51.764031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.271 [2024-12-09 06:10:51.764041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.271 [2024-12-09 06:10:51.768103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.271 [2024-12-09 06:10:51.768129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.271 [2024-12-09 06:10:51.768139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.271 [2024-12-09 06:10:51.772248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.271 [2024-12-09 06:10:51.772277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17088 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.271 [2024-12-09 06:10:51.772287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.271 [2024-12-09 06:10:51.776345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.271 [2024-12-09 06:10:51.776375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.271 [2024-12-09 06:10:51.776385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.271 [2024-12-09 06:10:51.780449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.271 [2024-12-09 06:10:51.780477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.271 [2024-12-09 06:10:51.780488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.271 [2024-12-09 06:10:51.784516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.271 [2024-12-09 06:10:51.784543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.271 [2024-12-09 06:10:51.784553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.271 [2024-12-09 06:10:51.788624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.271 [2024-12-09 06:10:51.788652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.271 [2024-12-09 06:10:51.788662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.271 [2024-12-09 06:10:51.792702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.271 [2024-12-09 06:10:51.792730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.271 [2024-12-09 06:10:51.792740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.271 [2024-12-09 06:10:51.796793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.271 [2024-12-09 06:10:51.796821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.271 [2024-12-09 06:10:51.796831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.271 [2024-12-09 06:10:51.800927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.271 [2024-12-09 06:10:51.800955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.271 [2024-12-09 06:10:51.800965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.271 [2024-12-09 06:10:51.805054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.271 [2024-12-09 06:10:51.805082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.272 [2024-12-09 06:10:51.805103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.272 [2024-12-09 06:10:51.809151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.272 [2024-12-09 06:10:51.809178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.272 [2024-12-09 06:10:51.809188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.272 [2024-12-09 06:10:51.813316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.272 [2024-12-09 06:10:51.813352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.272 [2024-12-09 06:10:51.813362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.272 [2024-12-09 06:10:51.817464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.272 [2024-12-09 06:10:51.817492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.272 [2024-12-09 06:10:51.817503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.272 [2024-12-09 06:10:51.821627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.272 [2024-12-09 06:10:51.821656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.272 [2024-12-09 06:10:51.821668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.272 [2024-12-09 06:10:51.825756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.272 [2024-12-09 06:10:51.825786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.272 [2024-12-09 06:10:51.825796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.272 [2024-12-09 06:10:51.829909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.272 [2024-12-09 06:10:51.829938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.272 [2024-12-09 06:10:51.829949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.272 [2024-12-09 06:10:51.834069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.272 [2024-12-09 06:10:51.834106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.272 [2024-12-09 06:10:51.834117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.272 [2024-12-09 06:10:51.838210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.272 [2024-12-09 06:10:51.838240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.272 [2024-12-09 06:10:51.838250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.272 [2024-12-09 06:10:51.842388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.272 [2024-12-09 06:10:51.842417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.272 [2024-12-09 06:10:51.842428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.272 [2024-12-09 06:10:51.846571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.272 [2024-12-09 06:10:51.846599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.272 [2024-12-09 06:10:51.846620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.272 [2024-12-09 06:10:51.850770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.272 [2024-12-09 06:10:51.850799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.272 [2024-12-09 06:10:51.850811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.532 [2024-12-09 06:10:51.854948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.532 [2024-12-09 06:10:51.854976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.532 [2024-12-09 06:10:51.854986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.532 [2024-12-09 06:10:51.859083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.532 
[2024-12-09 06:10:51.859119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.532 [2024-12-09 06:10:51.859129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.532 [2024-12-09 06:10:51.863311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.532 [2024-12-09 06:10:51.863339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.532 [2024-12-09 06:10:51.863349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.532 [2024-12-09 06:10:51.867446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.532 [2024-12-09 06:10:51.867473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.532 [2024-12-09 06:10:51.867483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.532 [2024-12-09 06:10:51.871566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.532 [2024-12-09 06:10:51.871594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.532 [2024-12-09 06:10:51.871604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.532 [2024-12-09 06:10:51.875698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.532 [2024-12-09 06:10:51.875725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.532 [2024-12-09 06:10:51.875736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.532 [2024-12-09 06:10:51.879824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.532 [2024-12-09 06:10:51.879851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.532 [2024-12-09 06:10:51.879862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.532 [2024-12-09 06:10:51.883908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.532 [2024-12-09 06:10:51.883936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.532 [2024-12-09 06:10:51.883946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.532 [2024-12-09 06:10:51.887976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x85c620) 01:11:57.532 [2024-12-09 06:10:51.888005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.532 [2024-12-09 06:10:51.888015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.532 [2024-12-09 06:10:51.892066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.532 [2024-12-09 06:10:51.892107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.532 [2024-12-09 06:10:51.892118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.533 [2024-12-09 06:10:51.896184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.533 [2024-12-09 06:10:51.896213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.533 [2024-12-09 06:10:51.896223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.533 [2024-12-09 06:10:51.900234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.533 [2024-12-09 06:10:51.900262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.533 [2024-12-09 06:10:51.900272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.533 [2024-12-09 06:10:51.904326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.533 [2024-12-09 06:10:51.904355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.533 [2024-12-09 06:10:51.904365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.533 [2024-12-09 06:10:51.908416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.533 [2024-12-09 06:10:51.908444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.533 [2024-12-09 06:10:51.908455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.533 [2024-12-09 06:10:51.912492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.533 [2024-12-09 06:10:51.912519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.533 [2024-12-09 06:10:51.912529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.533 [2024-12-09 06:10:51.916578] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.533 [2024-12-09 06:10:51.916606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.533 [2024-12-09 06:10:51.916617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.533 [2024-12-09 06:10:51.920673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.533 [2024-12-09 06:10:51.920700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.533 [2024-12-09 06:10:51.920711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.533 [2024-12-09 06:10:51.924795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.533 [2024-12-09 06:10:51.924822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.533 [2024-12-09 06:10:51.924833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.533 [2024-12-09 06:10:51.928937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.533 [2024-12-09 06:10:51.928964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.533 [2024-12-09 06:10:51.928974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.533 [2024-12-09 06:10:51.933028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.533 [2024-12-09 06:10:51.933055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.533 [2024-12-09 06:10:51.933066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.533 [2024-12-09 06:10:51.937172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.533 [2024-12-09 06:10:51.937201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.533 [2024-12-09 06:10:51.937211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.533 [2024-12-09 06:10:51.941182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.533 [2024-12-09 06:10:51.941209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.533 [2024-12-09 06:10:51.941219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 01:11:57.533 [2024-12-09 06:10:51.945271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.533 [2024-12-09 06:10:51.945299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.533 [2024-12-09 06:10:51.945309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.533 [2024-12-09 06:10:51.949366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.533 [2024-12-09 06:10:51.949410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.533 [2024-12-09 06:10:51.949420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.533 [2024-12-09 06:10:51.953529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.533 [2024-12-09 06:10:51.953557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.533 [2024-12-09 06:10:51.953567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.533 [2024-12-09 06:10:51.957669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.533 [2024-12-09 06:10:51.957698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.533 [2024-12-09 06:10:51.957708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.533 [2024-12-09 06:10:51.961897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.533 [2024-12-09 06:10:51.961924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.533 [2024-12-09 06:10:51.961934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.533 [2024-12-09 06:10:51.965988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.533 [2024-12-09 06:10:51.966016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.533 [2024-12-09 06:10:51.966026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.533 [2024-12-09 06:10:51.970108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.533 [2024-12-09 06:10:51.970135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.533 [2024-12-09 06:10:51.970145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.533 [2024-12-09 06:10:51.974263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.533 [2024-12-09 06:10:51.974292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.533 [2024-12-09 06:10:51.974303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.533 [2024-12-09 06:10:51.978448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.533 [2024-12-09 06:10:51.978478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.534 [2024-12-09 06:10:51.978489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.534 [2024-12-09 06:10:51.982607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.534 [2024-12-09 06:10:51.982634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.534 [2024-12-09 06:10:51.982644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.534 [2024-12-09 06:10:51.986774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.534 [2024-12-09 06:10:51.986801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.534 [2024-12-09 06:10:51.986811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.534 [2024-12-09 06:10:51.990879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.534 [2024-12-09 06:10:51.990907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.534 [2024-12-09 06:10:51.990917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.534 [2024-12-09 06:10:51.995008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.534 [2024-12-09 06:10:51.995036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.534 [2024-12-09 06:10:51.995046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.534 [2024-12-09 06:10:51.999137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.534 [2024-12-09 06:10:51.999164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.534 [2024-12-09 06:10:51.999174] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.534 [2024-12-09 06:10:52.003262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.534 [2024-12-09 06:10:52.003289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.534 [2024-12-09 06:10:52.003299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.534 [2024-12-09 06:10:52.007331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.534 [2024-12-09 06:10:52.007359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.534 [2024-12-09 06:10:52.007369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.534 [2024-12-09 06:10:52.011522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.534 [2024-12-09 06:10:52.011550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.534 [2024-12-09 06:10:52.011560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.534 [2024-12-09 06:10:52.015645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.534 [2024-12-09 06:10:52.015673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.534 [2024-12-09 06:10:52.015684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.534 [2024-12-09 06:10:52.019775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.534 [2024-12-09 06:10:52.019802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.534 [2024-12-09 06:10:52.019812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.534 [2024-12-09 06:10:52.023946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.534 [2024-12-09 06:10:52.023974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.534 [2024-12-09 06:10:52.023984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.534 [2024-12-09 06:10:52.028069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.534 [2024-12-09 06:10:52.028105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.534 [2024-12-09 06:10:52.028115] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.534 [2024-12-09 06:10:52.032137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.534 [2024-12-09 06:10:52.032164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.534 [2024-12-09 06:10:52.032174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.534 [2024-12-09 06:10:52.036274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.534 [2024-12-09 06:10:52.036302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.534 [2024-12-09 06:10:52.036313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.534 [2024-12-09 06:10:52.040373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.534 [2024-12-09 06:10:52.040402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.534 [2024-12-09 06:10:52.040411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:57.534 [2024-12-09 06:10:52.044450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.534 [2024-12-09 06:10:52.044478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.534 [2024-12-09 06:10:52.044489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:11:57.534 [2024-12-09 06:10:52.048531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.534 [2024-12-09 06:10:52.048559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.534 [2024-12-09 06:10:52.048569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:11:57.534 [2024-12-09 06:10:52.052696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.534 [2024-12-09 06:10:52.052724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:11:57.534 [2024-12-09 06:10:52.052735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:11:57.534 [2024-12-09 06:10:52.056864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620) 01:11:57.534 [2024-12-09 06:10:52.056891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:11:57.534 [2024-12-09 06:10:52.056900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:11:57.534 [2024-12-09 06:10:52.060980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620)
01:11:57.534 [2024-12-09 06:10:52.061009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:11:57.534 [2024-12-09 06:10:52.061019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
01:11:57.534 [2024-12-09 06:10:52.064987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620)
01:11:57.534 [2024-12-09 06:10:52.065014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:11:57.534 [2024-12-09 06:10:52.065024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:11:57.534 [2024-12-09 06:10:52.069099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85c620)
01:11:57.535 [2024-12-09 06:10:52.069124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:11:57.535 [2024-12-09 06:10:52.069135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:11:57.535 7432.00 IOPS, 929.00 MiB/s
01:11:57.535 Latency(us)
01:11:57.535 [2024-12-09T06:10:52.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:11:57.535 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
01:11:57.535 nvme0n1 : 2.00 7431.36 928.92 0.00 0.00 2150.54 1947.66 9580.36
01:11:57.535 [2024-12-09T06:10:52.122Z] ===================================================================================================================
01:11:57.535 [2024-12-09T06:10:52.122Z] Total : 7431.36 928.92 0.00 0.00 2150.54 1947.66 9580.36
01:11:57.535 {
01:11:57.535 "results": [
01:11:57.535 {
01:11:57.535 "job": "nvme0n1",
01:11:57.535 "core_mask": "0x2",
01:11:57.535 "workload": "randread",
01:11:57.535 "status": "finished",
01:11:57.535 "queue_depth": 16,
01:11:57.535 "io_size": 131072,
01:11:57.535 "runtime": 2.002326,
01:11:57.535 "iops": 7431.357331423555,
01:11:57.535 "mibps": 928.9196664279443,
01:11:57.535 "io_failed": 0,
01:11:57.535 "io_timeout": 0,
01:11:57.535 "avg_latency_us": 2150.5419458479078,
01:11:57.535 "min_latency_us": 1947.6562248995983,
01:11:57.535 "max_latency_us": 9580.363052208835
01:11:57.535 }
01:11:57.535 ],
01:11:57.535 "core_count": 1
01:11:57.535 }
01:11:57.535 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
01:11:57.535 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
01:11:57.535 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
01:11:57.535 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
01:11:57.535 | .driver_specific
01:11:57.535 | .nvme_error
01:11:57.535 | .status_code
01:11:57.535 | .command_transient_transport_error'
01:11:57.793 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 480 > 0 ))
01:11:57.793 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79880
01:11:57.793 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79880 ']'
01:11:57.793 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79880
01:11:57.793 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
01:11:57.793 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:11:57.793 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79880
01:11:57.793 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
01:11:57.793 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
01:11:57.793 killing process with pid 79880
01:11:57.793 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79880'
01:11:57.793 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79880
01:11:57.793 Received shutdown signal, test time was about 2.000000 seconds
01:11:57.793
01:11:57.793 Latency(us)
01:11:57.793 [2024-12-09T06:10:52.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:11:57.793 [2024-12-09T06:10:52.380Z] ===================================================================================================================
01:11:57.793 [2024-12-09T06:10:52.380Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:11:57.793 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79880
01:11:58.052 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
01:11:58.052 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
01:11:58.052 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
01:11:58.052 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
01:11:58.052 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
01:11:58.052 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79940
01:11:58.052 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79940 /var/tmp/bperf.sock
01:11:58.052 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
01:11:58.052 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79940 ']'
01:11:58.052 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
01:11:58.052 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@840 -- # local max_retries=100 01:11:58.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:11:58.052 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:11:58.052 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 01:11:58.052 06:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:11:58.311 [2024-12-09 06:10:52.651813] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:11:58.311 [2024-12-09 06:10:52.651886] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79940 ] 01:11:58.311 [2024-12-09 06:10:52.784204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:58.311 [2024-12-09 06:10:52.839650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:11:58.569 [2024-12-09 06:10:52.909922] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:11:59.137 06:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:11:59.137 06:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 01:11:59.137 06:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:11:59.137 06:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:11:59.137 06:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 01:11:59.137 06:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:59.137 06:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:11:59.395 06:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:59.395 06:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:11:59.395 06:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:11:59.395 nvme0n1 01:11:59.395 06:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 01:11:59.395 06:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:59.395 06:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:11:59.395 06:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
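The xtrace entries above and immediately below show the pattern the digest-error helper repeats for each case: read the command_transient_transport_error counter for the finished run from bdev_get_iostat through jq, kill that bdevperf instance, launch a fresh one on /var/tmp/bperf.sock, reconfigure it, re-arm crc32c corruption, and run the workload again. Condensed into the helper calls the trace itself prints (bperf_rpc and bperf_py are the test's own wrappers, which the digest.sh@18/@19 lines expand to rpc.py and bdevperf.py against /var/tmp/bperf.sock; nothing below is new, it is only a sketch of the flow already visible in this trace, not an authoritative recipe), one case amounts to roughly:

  # count transient transport errors reported by the previous bdevperf run
  bperf_rpc bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
  # start the next bdevperf case; -z keeps it idle until it is configured over /var/tmp/bperf.sock
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
  # configure the new instance, then re-enable crc32c corruption and run the workload
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
  bperf_py perform_tests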
01:11:59.395 06:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 01:11:59.395 06:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:11:59.654 Running I/O for 2 seconds... 01:11:59.654 [2024-12-09 06:10:54.092159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef7100 01:11:59.654 [2024-12-09 06:10:54.093314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.654 [2024-12-09 06:10:54.093371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:11:59.654 [2024-12-09 06:10:54.104030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef7970 01:11:59.654 [2024-12-09 06:10:54.105256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.655 [2024-12-09 06:10:54.105287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:11:59.655 [2024-12-09 06:10:54.115794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef81e0 01:11:59.655 [2024-12-09 06:10:54.116966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.655 [2024-12-09 06:10:54.116993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:11:59.655 [2024-12-09 06:10:54.127383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef8a50 01:11:59.655 [2024-12-09 06:10:54.128561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.655 [2024-12-09 06:10:54.128587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:11:59.655 [2024-12-09 06:10:54.139057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef92c0 01:11:59.655 [2024-12-09 06:10:54.140223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.655 [2024-12-09 06:10:54.140248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:11:59.655 [2024-12-09 06:10:54.150658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef9b30 01:11:59.655 [2024-12-09 06:10:54.151812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.655 [2024-12-09 06:10:54.151838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:11:59.655 [2024-12-09 06:10:54.162326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with 
pdu=0x200016efa3a0 01:11:59.655 [2024-12-09 06:10:54.163464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.655 [2024-12-09 06:10:54.163490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:11:59.655 [2024-12-09 06:10:54.173943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efac10 01:11:59.655 [2024-12-09 06:10:54.175064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.655 [2024-12-09 06:10:54.175100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:11:59.655 [2024-12-09 06:10:54.185626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efb480 01:11:59.655 [2024-12-09 06:10:54.186708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.655 [2024-12-09 06:10:54.186733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:11:59.655 [2024-12-09 06:10:54.197172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efbcf0 01:11:59.655 [2024-12-09 06:10:54.198245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.655 [2024-12-09 06:10:54.198274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:11:59.655 [2024-12-09 06:10:54.208898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efc560 01:11:59.655 [2024-12-09 06:10:54.209970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.655 [2024-12-09 06:10:54.209997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:11:59.655 [2024-12-09 06:10:54.220622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efcdd0 01:11:59.655 [2024-12-09 06:10:54.221696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.655 [2024-12-09 06:10:54.221723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:11:59.655 [2024-12-09 06:10:54.232301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efd640 01:11:59.655 [2024-12-09 06:10:54.233320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.655 [2024-12-09 06:10:54.233353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:11:59.914 [2024-12-09 06:10:54.244229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1864b70) with pdu=0x200016efdeb0 01:11:59.914 [2024-12-09 06:10:54.245243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.914 [2024-12-09 06:10:54.245268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:11:59.914 [2024-12-09 06:10:54.255942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efe720 01:11:59.914 [2024-12-09 06:10:54.256953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.914 [2024-12-09 06:10:54.256979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:11:59.914 [2024-12-09 06:10:54.267736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016eff3c8 01:11:59.914 [2024-12-09 06:10:54.268723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.914 [2024-12-09 06:10:54.268749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:11:59.914 [2024-12-09 06:10:54.284156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016eff3c8 01:11:59.914 [2024-12-09 06:10:54.286101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.914 [2024-12-09 06:10:54.286135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:11:59.914 [2024-12-09 06:10:54.295852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efe720 01:11:59.914 [2024-12-09 06:10:54.297790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.914 [2024-12-09 06:10:54.297817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:11:59.914 [2024-12-09 06:10:54.307602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efdeb0 01:11:59.914 [2024-12-09 06:10:54.309528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.914 [2024-12-09 06:10:54.309555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:11:59.914 [2024-12-09 06:10:54.319347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efd640 01:11:59.914 [2024-12-09 06:10:54.321251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.914 [2024-12-09 06:10:54.321277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:11:59.914 [2024-12-09 06:10:54.331157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1864b70) with pdu=0x200016efcdd0 01:11:59.914 [2024-12-09 06:10:54.333042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.914 [2024-12-09 06:10:54.333068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:11:59.914 [2024-12-09 06:10:54.342908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efc560 01:11:59.914 [2024-12-09 06:10:54.344796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.914 [2024-12-09 06:10:54.344819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:11:59.914 [2024-12-09 06:10:54.354683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efbcf0 01:11:59.914 [2024-12-09 06:10:54.356550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.914 [2024-12-09 06:10:54.356573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:11:59.915 [2024-12-09 06:10:54.366541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efb480 01:11:59.915 [2024-12-09 06:10:54.368381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.915 [2024-12-09 06:10:54.368405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:11:59.915 [2024-12-09 06:10:54.378224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efac10 01:11:59.915 [2024-12-09 06:10:54.380046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.915 [2024-12-09 06:10:54.380070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:11:59.915 [2024-12-09 06:10:54.389958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efa3a0 01:11:59.915 [2024-12-09 06:10:54.391779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.915 [2024-12-09 06:10:54.391803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:11:59.915 [2024-12-09 06:10:54.401684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef9b30 01:11:59.915 [2024-12-09 06:10:54.403472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.915 [2024-12-09 06:10:54.403498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:11:59.915 [2024-12-09 06:10:54.413405] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef92c0 01:11:59.915 [2024-12-09 06:10:54.415175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.915 [2024-12-09 06:10:54.415201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:11:59.915 [2024-12-09 06:10:54.425276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef8a50 01:11:59.915 [2024-12-09 06:10:54.427013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.915 [2024-12-09 06:10:54.427039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:11:59.915 [2024-12-09 06:10:54.436988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef81e0 01:11:59.915 [2024-12-09 06:10:54.438747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.915 [2024-12-09 06:10:54.438772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:11:59.915 [2024-12-09 06:10:54.448962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef7970 01:11:59.915 [2024-12-09 06:10:54.450723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.915 [2024-12-09 06:10:54.450747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:11:59.915 [2024-12-09 06:10:54.460949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef7100 01:11:59.915 [2024-12-09 06:10:54.462679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.915 [2024-12-09 06:10:54.462706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:11:59.915 [2024-12-09 06:10:54.473245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef6890 01:11:59.915 [2024-12-09 06:10:54.474955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.915 [2024-12-09 06:10:54.474981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:11:59.915 [2024-12-09 06:10:54.485487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef6020 01:11:59.915 [2024-12-09 06:10:54.487188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:11:59.915 [2024-12-09 06:10:54.487214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:11:59.915 [2024-12-09 06:10:54.498311] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef57b0 01:12:00.174 [2024-12-09 06:10:54.499980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.174 [2024-12-09 06:10:54.500008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:12:00.174 [2024-12-09 06:10:54.510382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef4f40 01:12:00.174 [2024-12-09 06:10:54.512040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.174 [2024-12-09 06:10:54.512067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:12:00.175 [2024-12-09 06:10:54.522397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef46d0 01:12:00.175 [2024-12-09 06:10:54.524059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.175 [2024-12-09 06:10:54.524092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:12:00.175 [2024-12-09 06:10:54.534394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef3e60 01:12:00.175 [2024-12-09 06:10:54.536042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.175 [2024-12-09 06:10:54.536069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:12:00.175 [2024-12-09 06:10:54.546136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef35f0 01:12:00.175 [2024-12-09 06:10:54.547750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.175 [2024-12-09 06:10:54.547776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:12:00.175 [2024-12-09 06:10:54.557836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef2d80 01:12:00.175 [2024-12-09 06:10:54.559453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.175 [2024-12-09 06:10:54.559477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:12:00.175 [2024-12-09 06:10:54.569518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef2510 01:12:00.175 [2024-12-09 06:10:54.571113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.175 [2024-12-09 06:10:54.571145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:12:00.175 
[2024-12-09 06:10:54.581270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef1ca0 01:12:00.175 [2024-12-09 06:10:54.582849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.175 [2024-12-09 06:10:54.582875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:12:00.175 [2024-12-09 06:10:54.593000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef1430 01:12:00.175 [2024-12-09 06:10:54.594576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.175 [2024-12-09 06:10:54.594602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:12:00.175 [2024-12-09 06:10:54.604618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef0bc0 01:12:00.175 [2024-12-09 06:10:54.606188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.175 [2024-12-09 06:10:54.606214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:12:00.175 [2024-12-09 06:10:54.616468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef0350 01:12:00.175 [2024-12-09 06:10:54.617987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.175 [2024-12-09 06:10:54.618017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:12:00.175 [2024-12-09 06:10:54.628283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016eefae0 01:12:00.175 [2024-12-09 06:10:54.629788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.175 [2024-12-09 06:10:54.629814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:12:00.175 [2024-12-09 06:10:54.639969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016eef270 01:12:00.175 [2024-12-09 06:10:54.641479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.175 [2024-12-09 06:10:54.641505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:12:00.175 [2024-12-09 06:10:54.651651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016eeea00 01:12:00.175 [2024-12-09 06:10:54.653139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.175 [2024-12-09 06:10:54.653163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 
dnr:0 01:12:00.175 [2024-12-09 06:10:54.663486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016eee190 01:12:00.175 [2024-12-09 06:10:54.664945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.175 [2024-12-09 06:10:54.664975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:12:00.175 [2024-12-09 06:10:54.675363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016eed920 01:12:00.175 [2024-12-09 06:10:54.676808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.175 [2024-12-09 06:10:54.676839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:12:00.175 [2024-12-09 06:10:54.687316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016eed0b0 01:12:00.175 [2024-12-09 06:10:54.688743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.175 [2024-12-09 06:10:54.688769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:12:00.175 [2024-12-09 06:10:54.699249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016eec840 01:12:00.175 [2024-12-09 06:10:54.700685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.175 [2024-12-09 06:10:54.700710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:12:00.175 [2024-12-09 06:10:54.711246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016eebfd0 01:12:00.175 [2024-12-09 06:10:54.712656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.175 [2024-12-09 06:10:54.712681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:12:00.175 [2024-12-09 06:10:54.723235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016eeb760 01:12:00.175 [2024-12-09 06:10:54.724610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.175 [2024-12-09 06:10:54.724636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:12:00.175 [2024-12-09 06:10:54.735351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016eeaef0 01:12:00.175 [2024-12-09 06:10:54.736722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.175 [2024-12-09 06:10:54.736748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 
cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:12:00.175 [2024-12-09 06:10:54.747519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016eea680 01:12:00.175 [2024-12-09 06:10:54.748866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.176 [2024-12-09 06:10:54.748892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:12:00.176 [2024-12-09 06:10:54.759676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee9e10 01:12:00.436 [2024-12-09 06:10:54.761015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.436 [2024-12-09 06:10:54.761042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:12:00.436 [2024-12-09 06:10:54.771941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee95a0 01:12:00.436 [2024-12-09 06:10:54.773265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.436 [2024-12-09 06:10:54.773289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:12:00.436 [2024-12-09 06:10:54.783795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee8d30 01:12:00.436 [2024-12-09 06:10:54.785106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.436 [2024-12-09 06:10:54.785133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:12:00.436 [2024-12-09 06:10:54.795854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee84c0 01:12:00.436 [2024-12-09 06:10:54.797145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.436 [2024-12-09 06:10:54.797169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:12:00.436 [2024-12-09 06:10:54.807731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee7c50 01:12:00.436 [2024-12-09 06:10:54.809010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.436 [2024-12-09 06:10:54.809036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:12:00.436 [2024-12-09 06:10:54.819883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee73e0 01:12:00.436 [2024-12-09 06:10:54.821145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.436 [2024-12-09 06:10:54.821170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:12:00.436 [2024-12-09 06:10:54.831764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee6b70 01:12:00.436 [2024-12-09 06:10:54.833022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.436 [2024-12-09 06:10:54.833048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:12:00.436 [2024-12-09 06:10:54.843816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee6300 01:12:00.436 [2024-12-09 06:10:54.845046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.436 [2024-12-09 06:10:54.845072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:12:00.436 [2024-12-09 06:10:54.855600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee5a90 01:12:00.437 [2024-12-09 06:10:54.856817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.437 [2024-12-09 06:10:54.856844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:12:00.437 [2024-12-09 06:10:54.867423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee5220 01:12:00.437 [2024-12-09 06:10:54.868648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.437 [2024-12-09 06:10:54.868673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:12:00.437 [2024-12-09 06:10:54.879296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee49b0 01:12:00.437 [2024-12-09 06:10:54.880482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.437 [2024-12-09 06:10:54.880508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:12:00.437 [2024-12-09 06:10:54.891029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee4140 01:12:00.437 [2024-12-09 06:10:54.892218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.437 [2024-12-09 06:10:54.892244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:12:00.437 [2024-12-09 06:10:54.902744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee38d0 01:12:00.437 [2024-12-09 06:10:54.903928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.437 [2024-12-09 06:10:54.903953] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:12:00.437 [2024-12-09 06:10:54.914487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee3060 01:12:00.437 [2024-12-09 06:10:54.915607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.437 [2024-12-09 06:10:54.915632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:12:00.437 [2024-12-09 06:10:54.926198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee27f0 01:12:00.437 [2024-12-09 06:10:54.927304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.437 [2024-12-09 06:10:54.927329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:12:00.437 [2024-12-09 06:10:54.937927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee1f80 01:12:00.437 [2024-12-09 06:10:54.939037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.437 [2024-12-09 06:10:54.939062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:12:00.437 [2024-12-09 06:10:54.949732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee1710 01:12:00.437 [2024-12-09 06:10:54.950814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.437 [2024-12-09 06:10:54.950840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:12:00.437 [2024-12-09 06:10:54.961564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee0ea0 01:12:00.437 [2024-12-09 06:10:54.962637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.437 [2024-12-09 06:10:54.962664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:12:00.437 [2024-12-09 06:10:54.973336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee0630 01:12:00.437 [2024-12-09 06:10:54.974395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.437 [2024-12-09 06:10:54.974421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:12:00.437 [2024-12-09 06:10:54.985027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016edfdc0 01:12:00.437 [2024-12-09 06:10:54.986091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.437 [2024-12-09 
06:10:54.986126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:12:00.437 [2024-12-09 06:10:54.996790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016edf550 01:12:00.437 [2024-12-09 06:10:54.997823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.437 [2024-12-09 06:10:54.997850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:12:00.437 [2024-12-09 06:10:55.008600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016edece0 01:12:00.437 [2024-12-09 06:10:55.009612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.437 [2024-12-09 06:10:55.009638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:12:00.437 [2024-12-09 06:10:55.020587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ede470 01:12:00.696 [2024-12-09 06:10:55.021585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.696 [2024-12-09 06:10:55.021612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:12:00.696 [2024-12-09 06:10:55.037503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016eddc00 01:12:00.696 [2024-12-09 06:10:55.039462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.696 [2024-12-09 06:10:55.039489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:00.696 [2024-12-09 06:10:55.049238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ede470 01:12:00.696 [2024-12-09 06:10:55.051186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.696 [2024-12-09 06:10:55.051211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:12:00.696 [2024-12-09 06:10:55.060959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016edece0 01:12:00.696 [2024-12-09 06:10:55.062935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.696 [2024-12-09 06:10:55.062967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:12:00.696 [2024-12-09 06:10:55.072781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016edf550 01:12:00.696 [2024-12-09 06:10:55.074714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.696 
[2024-12-09 06:10:55.074740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:12:00.696 21254.00 IOPS, 83.02 MiB/s [2024-12-09T06:10:55.283Z] [2024-12-09 06:10:55.085927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016edfdc0 01:12:00.696 [2024-12-09 06:10:55.087829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.696 [2024-12-09 06:10:55.087856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:12:00.696 [2024-12-09 06:10:55.097687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee0630 01:12:00.696 [2024-12-09 06:10:55.099577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.696 [2024-12-09 06:10:55.099603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:12:00.696 [2024-12-09 06:10:55.109446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee0ea0 01:12:00.696 [2024-12-09 06:10:55.111309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.696 [2024-12-09 06:10:55.111333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:12:00.696 [2024-12-09 06:10:55.121191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee1710 01:12:00.696 [2024-12-09 06:10:55.123044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.696 [2024-12-09 06:10:55.123070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:12:00.696 [2024-12-09 06:10:55.132854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee1f80 01:12:00.696 [2024-12-09 06:10:55.134701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.696 [2024-12-09 06:10:55.134726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:12:00.696 [2024-12-09 06:10:55.144593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee27f0 01:12:00.696 [2024-12-09 06:10:55.146431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.696 [2024-12-09 06:10:55.146456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:12:00.696 [2024-12-09 06:10:55.156362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee3060 01:12:00.696 [2024-12-09 06:10:55.158184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:103 nsid:1 lba:8440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.696 [2024-12-09 06:10:55.158209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:12:00.696 [2024-12-09 06:10:55.168147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee38d0 01:12:00.696 [2024-12-09 06:10:55.169948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.696 [2024-12-09 06:10:55.169973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:12:00.696 [2024-12-09 06:10:55.179797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee4140 01:12:00.696 [2024-12-09 06:10:55.181586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.696 [2024-12-09 06:10:55.181611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:12:00.696 [2024-12-09 06:10:55.191452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee49b0 01:12:00.696 [2024-12-09 06:10:55.193199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.696 [2024-12-09 06:10:55.193224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:12:00.696 [2024-12-09 06:10:55.203207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee5220 01:12:00.696 [2024-12-09 06:10:55.204914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.697 [2024-12-09 06:10:55.204939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:12:00.697 [2024-12-09 06:10:55.214793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee5a90 01:12:00.697 [2024-12-09 06:10:55.216514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.697 [2024-12-09 06:10:55.216540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:12:00.697 [2024-12-09 06:10:55.226446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee6300 01:12:00.697 [2024-12-09 06:10:55.228158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.697 [2024-12-09 06:10:55.228182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:00.697 [2024-12-09 06:10:55.238134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee6b70 01:12:00.697 [2024-12-09 06:10:55.239827] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.697 [2024-12-09 06:10:55.239858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:12:00.697 [2024-12-09 06:10:55.249833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee73e0 01:12:00.697 [2024-12-09 06:10:55.251517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.697 [2024-12-09 06:10:55.251545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:12:00.697 [2024-12-09 06:10:55.261506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee7c50 01:12:00.697 [2024-12-09 06:10:55.263191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.697 [2024-12-09 06:10:55.263217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:12:00.697 [2024-12-09 06:10:55.273393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee84c0 01:12:00.697 [2024-12-09 06:10:55.275040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.697 [2024-12-09 06:10:55.275065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:12:00.955 [2024-12-09 06:10:55.285281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee8d30 01:12:00.955 [2024-12-09 06:10:55.286920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.955 [2024-12-09 06:10:55.286946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:12:00.955 [2024-12-09 06:10:55.297111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee95a0 01:12:00.955 [2024-12-09 06:10:55.298741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.955 [2024-12-09 06:10:55.298765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:12:00.955 [2024-12-09 06:10:55.308870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ee9e10 01:12:00.955 [2024-12-09 06:10:55.310504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.955 [2024-12-09 06:10:55.310530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:12:00.955 [2024-12-09 06:10:55.320554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016eea680 01:12:00.955 [2024-12-09 
06:10:55.322133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.955 [2024-12-09 06:10:55.322158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:12:00.955 [2024-12-09 06:10:55.332180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016eeaef0 01:12:00.955 [2024-12-09 06:10:55.333733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.955 [2024-12-09 06:10:55.333758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:12:00.955 [2024-12-09 06:10:55.343930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016eeb760 01:12:00.955 [2024-12-09 06:10:55.345502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.955 [2024-12-09 06:10:55.345529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:12:00.955 [2024-12-09 06:10:55.355553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016eebfd0 01:12:00.955 [2024-12-09 06:10:55.357085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.955 [2024-12-09 06:10:55.357127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:12:00.955 [2024-12-09 06:10:55.367237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016eec840 01:12:00.955 [2024-12-09 06:10:55.368753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.955 [2024-12-09 06:10:55.368780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:12:00.955 [2024-12-09 06:10:55.379073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016eed0b0 01:12:00.955 [2024-12-09 06:10:55.380603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.955 [2024-12-09 06:10:55.380629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:12:00.955 [2024-12-09 06:10:55.390850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016eed920 01:12:00.955 [2024-12-09 06:10:55.392354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.955 [2024-12-09 06:10:55.392378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:12:00.956 [2024-12-09 06:10:55.402609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016eee190 
01:12:00.956 [2024-12-09 06:10:55.404105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.956 [2024-12-09 06:10:55.404139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:12:00.956 [2024-12-09 06:10:55.414304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016eeea00 01:12:00.956 [2024-12-09 06:10:55.415768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.956 [2024-12-09 06:10:55.415794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:00.956 [2024-12-09 06:10:55.425911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016eef270 01:12:00.956 [2024-12-09 06:10:55.427366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.956 [2024-12-09 06:10:55.427391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:12:00.956 [2024-12-09 06:10:55.437522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016eefae0 01:12:00.956 [2024-12-09 06:10:55.438939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.956 [2024-12-09 06:10:55.438964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:12:00.956 [2024-12-09 06:10:55.449218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef0350 01:12:00.956 [2024-12-09 06:10:55.450637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.956 [2024-12-09 06:10:55.450664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:12:00.956 [2024-12-09 06:10:55.460902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef0bc0 01:12:00.956 [2024-12-09 06:10:55.462322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.956 [2024-12-09 06:10:55.462348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:12:00.956 [2024-12-09 06:10:55.472538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef1430 01:12:00.956 [2024-12-09 06:10:55.473936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.956 [2024-12-09 06:10:55.473962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:12:00.956 [2024-12-09 06:10:55.484233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with 
pdu=0x200016ef1ca0 01:12:00.956 [2024-12-09 06:10:55.485620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.956 [2024-12-09 06:10:55.485646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:12:00.956 [2024-12-09 06:10:55.496280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef2510 01:12:00.956 [2024-12-09 06:10:55.497636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.956 [2024-12-09 06:10:55.497661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:12:00.956 [2024-12-09 06:10:55.508463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef2d80 01:12:00.956 [2024-12-09 06:10:55.509811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.956 [2024-12-09 06:10:55.509837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:12:00.956 [2024-12-09 06:10:55.520700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef35f0 01:12:00.956 [2024-12-09 06:10:55.522035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.956 [2024-12-09 06:10:55.522061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:12:00.956 [2024-12-09 06:10:55.532727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef3e60 01:12:00.956 [2024-12-09 06:10:55.534051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:00.956 [2024-12-09 06:10:55.534076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:12:01.214 [2024-12-09 06:10:55.544646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef46d0 01:12:01.214 [2024-12-09 06:10:55.545932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.214 [2024-12-09 06:10:55.545961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:12:01.214 [2024-12-09 06:10:55.556631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef4f40 01:12:01.214 [2024-12-09 06:10:55.557907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.214 [2024-12-09 06:10:55.557935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:12:01.214 [2024-12-09 06:10:55.568583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1864b70) with pdu=0x200016ef57b0 01:12:01.214 [2024-12-09 06:10:55.569842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.214 [2024-12-09 06:10:55.569868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:12:01.214 [2024-12-09 06:10:55.580514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef6020 01:12:01.214 [2024-12-09 06:10:55.581796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.214 [2024-12-09 06:10:55.581823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:12:01.214 [2024-12-09 06:10:55.592313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef6890 01:12:01.214 [2024-12-09 06:10:55.593540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.214 [2024-12-09 06:10:55.593567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:12:01.214 [2024-12-09 06:10:55.604212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef7100 01:12:01.214 [2024-12-09 06:10:55.605436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.214 [2024-12-09 06:10:55.605462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:01.214 [2024-12-09 06:10:55.616065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef7970 01:12:01.214 [2024-12-09 06:10:55.617275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.214 [2024-12-09 06:10:55.617300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:12:01.214 [2024-12-09 06:10:55.627724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef81e0 01:12:01.215 [2024-12-09 06:10:55.628907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.215 [2024-12-09 06:10:55.628933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:12:01.215 [2024-12-09 06:10:55.639674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef8a50 01:12:01.215 [2024-12-09 06:10:55.640846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.215 [2024-12-09 06:10:55.640871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:12:01.215 [2024-12-09 06:10:55.651332] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef92c0 01:12:01.215 [2024-12-09 06:10:55.652484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.215 [2024-12-09 06:10:55.652509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:12:01.215 [2024-12-09 06:10:55.663170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef9b30 01:12:01.215 [2024-12-09 06:10:55.664330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.215 [2024-12-09 06:10:55.664355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:12:01.215 [2024-12-09 06:10:55.674928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efa3a0 01:12:01.215 [2024-12-09 06:10:55.676061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.215 [2024-12-09 06:10:55.676100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:12:01.215 [2024-12-09 06:10:55.686626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efac10 01:12:01.215 [2024-12-09 06:10:55.687736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.215 [2024-12-09 06:10:55.687763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:12:01.215 [2024-12-09 06:10:55.698332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efb480 01:12:01.215 [2024-12-09 06:10:55.699436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.215 [2024-12-09 06:10:55.699461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:12:01.215 [2024-12-09 06:10:55.710021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efbcf0 01:12:01.215 [2024-12-09 06:10:55.711121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.215 [2024-12-09 06:10:55.711146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:12:01.215 [2024-12-09 06:10:55.721607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efc560 01:12:01.215 [2024-12-09 06:10:55.722686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.215 [2024-12-09 06:10:55.722711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:12:01.215 [2024-12-09 06:10:55.733283] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efcdd0 01:12:01.215 [2024-12-09 06:10:55.734349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.215 [2024-12-09 06:10:55.734375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:12:01.215 [2024-12-09 06:10:55.744906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efd640 01:12:01.215 [2024-12-09 06:10:55.745958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.215 [2024-12-09 06:10:55.745983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:12:01.215 [2024-12-09 06:10:55.756577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efdeb0 01:12:01.215 [2024-12-09 06:10:55.757590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.215 [2024-12-09 06:10:55.757617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:12:01.215 [2024-12-09 06:10:55.768222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efe720 01:12:01.215 [2024-12-09 06:10:55.769214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.215 [2024-12-09 06:10:55.769240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:12:01.215 [2024-12-09 06:10:55.780139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016eff3c8 01:12:01.215 [2024-12-09 06:10:55.781128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.215 [2024-12-09 06:10:55.781153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:12:01.215 [2024-12-09 06:10:55.796701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016eff3c8 01:12:01.215 [2024-12-09 06:10:55.798640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.215 [2024-12-09 06:10:55.798666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:12:01.474 [2024-12-09 06:10:55.808634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efe720 01:12:01.474 [2024-12-09 06:10:55.810553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.474 [2024-12-09 06:10:55.810580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:12:01.474 [2024-12-09 
06:10:55.820486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efdeb0 01:12:01.474 [2024-12-09 06:10:55.822395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.474 [2024-12-09 06:10:55.822421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:12:01.474 [2024-12-09 06:10:55.832208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efd640 01:12:01.474 [2024-12-09 06:10:55.834094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.474 [2024-12-09 06:10:55.834127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:12:01.474 [2024-12-09 06:10:55.844123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efcdd0 01:12:01.474 [2024-12-09 06:10:55.846003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.474 [2024-12-09 06:10:55.846030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:12:01.474 [2024-12-09 06:10:55.856098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efc560 01:12:01.474 [2024-12-09 06:10:55.857975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.474 [2024-12-09 06:10:55.858001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:12:01.474 [2024-12-09 06:10:55.868215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efbcf0 01:12:01.474 [2024-12-09 06:10:55.870059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.474 [2024-12-09 06:10:55.870094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:12:01.474 [2024-12-09 06:10:55.880343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efb480 01:12:01.474 [2024-12-09 06:10:55.882184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.474 [2024-12-09 06:10:55.882211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:12:01.474 [2024-12-09 06:10:55.892429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efac10 01:12:01.474 [2024-12-09 06:10:55.894260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.474 [2024-12-09 06:10:55.894284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
01:12:01.474 [2024-12-09 06:10:55.904267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016efa3a0 01:12:01.474 [2024-12-09 06:10:55.906081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.474 [2024-12-09 06:10:55.906115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:12:01.474 [2024-12-09 06:10:55.916248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef9b30 01:12:01.474 [2024-12-09 06:10:55.918039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.474 [2024-12-09 06:10:55.918065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:12:01.474 [2024-12-09 06:10:55.928305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef92c0 01:12:01.474 [2024-12-09 06:10:55.930096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.474 [2024-12-09 06:10:55.930121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:12:01.474 [2024-12-09 06:10:55.940284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef8a50 01:12:01.474 [2024-12-09 06:10:55.942047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.474 [2024-12-09 06:10:55.942073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:12:01.474 [2024-12-09 06:10:55.952213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef81e0 01:12:01.474 [2024-12-09 06:10:55.953975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.474 [2024-12-09 06:10:55.954000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:12:01.474 [2024-12-09 06:10:55.964272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef7970 01:12:01.474 [2024-12-09 06:10:55.966017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.474 [2024-12-09 06:10:55.966045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:12:01.474 [2024-12-09 06:10:55.976165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef7100 01:12:01.474 [2024-12-09 06:10:55.977885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.474 [2024-12-09 06:10:55.977912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0043 p:0 m:0 dnr:0 01:12:01.474 [2024-12-09 06:10:55.987865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef6890 01:12:01.474 [2024-12-09 06:10:55.989573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.474 [2024-12-09 06:10:55.989598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:12:01.474 [2024-12-09 06:10:55.999540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef6020 01:12:01.475 [2024-12-09 06:10:56.001207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.475 [2024-12-09 06:10:56.001233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:12:01.475 [2024-12-09 06:10:56.011351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef57b0 01:12:01.475 [2024-12-09 06:10:56.013023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.475 [2024-12-09 06:10:56.013048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:12:01.475 [2024-12-09 06:10:56.023040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef4f40 01:12:01.475 [2024-12-09 06:10:56.024682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.475 [2024-12-09 06:10:56.024713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:12:01.475 [2024-12-09 06:10:56.034721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef46d0 01:12:01.475 [2024-12-09 06:10:56.036357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.475 [2024-12-09 06:10:56.036388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:12:01.475 [2024-12-09 06:10:56.046520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef3e60 01:12:01.475 [2024-12-09 06:10:56.048146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.475 [2024-12-09 06:10:56.048170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:12:01.475 [2024-12-09 06:10:56.058442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef35f0 01:12:01.733 [2024-12-09 06:10:56.060040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:12:01.733 [2024-12-09 06:10:56.060069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:73 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
01:12:01.733 [2024-12-09 06:10:56.070384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864b70) with pdu=0x200016ef2d80
01:12:01.733 [2024-12-09 06:10:56.071963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:12:01.733 [2024-12-09 06:10:56.071996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
01:12:01.733 21316.50 IOPS, 83.27 MiB/s
01:12:01.733 Latency(us)
01:12:01.733 [2024-12-09T06:10:56.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:12:01.733 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
01:12:01.733 nvme0n1 : 2.01 21324.61 83.30 0.00 0.00 5997.59 5606.09 22634.92
01:12:01.733 [2024-12-09T06:10:56.320Z] ===================================================================================================================
01:12:01.733 [2024-12-09T06:10:56.320Z] Total : 21324.61 83.30 0.00 0.00 5997.59 5606.09 22634.92
01:12:01.733 {
01:12:01.733 "results": [
01:12:01.733 {
01:12:01.733 "job": "nvme0n1",
01:12:01.733 "core_mask": "0x2",
01:12:01.733 "workload": "randwrite",
01:12:01.733 "status": "finished",
01:12:01.733 "queue_depth": 128,
01:12:01.733 "io_size": 4096,
01:12:01.733 "runtime": 2.005242,
01:12:01.733 "iops": 21324.608201902814,
01:12:01.733 "mibps": 83.29925078868287,
01:12:01.733 "io_failed": 0,
01:12:01.733 "io_timeout": 0,
01:12:01.733 "avg_latency_us": 5997.587939240886,
01:12:01.733 "min_latency_us": 5606.0915662650605,
01:12:01.733 "max_latency_us": 22634.923694779118
01:12:01.733 }
01:12:01.733 ],
01:12:01.733 "core_count": 1
01:12:01.733 }
01:12:01.733 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
01:12:01.733 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
01:12:01.733 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
01:12:01.733 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
01:12:01.733 | .driver_specific
01:12:01.734 | .nvme_error
01:12:01.734 | .status_code
01:12:01.734 | .command_transient_transport_error'
01:12:01.734 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 167 > 0 ))
01:12:01.734 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79940
01:12:01.734 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79940 ']'
01:12:01.734 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79940
01:12:01.734 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
01:12:01.734 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:12:01.734 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79940
01:12:01.993 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
01:12:01.993 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
01:12:01.993 killing process with pid 79940
Received shutdown signal, test time was about 2.000000 seconds
01:12:01.993
01:12:01.993 Latency(us)
01:12:01.993 [2024-12-09T06:10:56.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:12:01.993 [2024-12-09T06:10:56.580Z] ===================================================================================================================
01:12:01.993 [2024-12-09T06:10:56.580Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:12:01.993 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79940'
01:12:01.993 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79940
01:12:01.993 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79940
01:12:02.252 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
01:12:02.252 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
01:12:02.252 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
01:12:02.252 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
01:12:02.252 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
01:12:02.252 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79995
01:12:02.252 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79995 /var/tmp/bperf.sock
01:12:02.252 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79995 ']'
01:12:02.252 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
01:12:02.252 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
01:12:02.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
01:12:02.252 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
01:12:02.252 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
01:12:02.252 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
01:12:02.252 06:10:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
01:12:02.252 I/O size of 131072 is greater than zero copy threshold (65536).
01:12:02.252 Zero copy mechanism will not be used.
01:12:02.252 [2024-12-09 06:10:56.641630] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization...
01:12:02.252 [2024-12-09 06:10:56.641694] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79995 ]
01:12:02.252 [2024-12-09 06:10:56.790849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:12:02.512 [2024-12-09 06:10:56.846790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
01:12:02.512 [2024-12-09 06:10:56.916847] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
01:12:03.079 06:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:12:03.079 06:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
01:12:03.079 06:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
01:12:03.079 06:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
01:12:03.338 06:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
01:12:03.338 06:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
01:12:03.338 06:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
01:12:03.338 06:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:12:03.338 06:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
01:12:03.338 06:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
01:12:03.598 nvme0n1
01:12:03.598 06:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
01:12:03.598 06:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
01:12:03.598 06:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
01:12:03.598 06:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:12:03.598 06:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
01:12:03.598 06:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
01:12:03.598 I/O size of 131072 is greater than zero copy threshold (65536).
01:12:03.598 Zero copy mechanism will not be used.
01:12:03.598 Running I/O for 2 seconds...
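For readers following the trace, the get_transient_errcount step shown above reduces to a small shell pipeline. The sketch below is illustrative only; it reuses the RPC socket path (/var/tmp/bperf.sock), bdev name (nvme0n1), and jq filter that appear verbatim in the trace, while the surrounding helpers (bperf_rpc, rpc_cmd) live in host/digest.sh and are not reproduced here.

  # Ask the bdevperf RPC server for per-bdev I/O statistics, then extract the
  # count of completions that ended in COMMAND TRANSIENT TRANSPORT ERROR.
  # These counters are populated because bdev_nvme_set_options is called with
  # --nvme-error-stat before the controller is attached, as the trace shows.
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

  # The digest-error test passes only if at least one such error was recorded;
  # the qd=128 run above reported 167.
  (( errcount > 0 ))

The same check should run again after the randwrite/131072/qd=16 job whose 2-second I/O phase starts below.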
01:12:03.598 [2024-12-09 06:10:58.070594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.598 [2024-12-09 06:10:58.070760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.598 [2024-12-09 06:10:58.070794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:03.598 [2024-12-09 06:10:58.075927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.598 [2024-12-09 06:10:58.076199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.598 [2024-12-09 06:10:58.076230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:03.598 [2024-12-09 06:10:58.081132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.598 [2024-12-09 06:10:58.081313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.598 [2024-12-09 06:10:58.081341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:03.598 [2024-12-09 06:10:58.086214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.598 [2024-12-09 06:10:58.086405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.598 [2024-12-09 06:10:58.086427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:03.598 [2024-12-09 06:10:58.091260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.598 [2024-12-09 06:10:58.091468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.599 [2024-12-09 06:10:58.091489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:03.599 [2024-12-09 06:10:58.096343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.599 [2024-12-09 06:10:58.096569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.599 [2024-12-09 06:10:58.096590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:03.599 [2024-12-09 06:10:58.101394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.599 [2024-12-09 06:10:58.101571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.599 [2024-12-09 06:10:58.101592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:03.599 [2024-12-09 06:10:58.106562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.599 [2024-12-09 06:10:58.106738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.599 [2024-12-09 06:10:58.106759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:03.599 [2024-12-09 06:10:58.111630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.599 [2024-12-09 06:10:58.111817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.599 [2024-12-09 06:10:58.111838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:03.599 [2024-12-09 06:10:58.116715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.599 [2024-12-09 06:10:58.116940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.599 [2024-12-09 06:10:58.116961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:03.599 [2024-12-09 06:10:58.122084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.599 [2024-12-09 06:10:58.122287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.599 [2024-12-09 06:10:58.122308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:03.599 [2024-12-09 06:10:58.127231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.599 [2024-12-09 06:10:58.127409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.599 [2024-12-09 06:10:58.127430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:03.599 [2024-12-09 06:10:58.132286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.599 [2024-12-09 06:10:58.132486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.599 [2024-12-09 06:10:58.132507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:03.599 [2024-12-09 06:10:58.137429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.599 [2024-12-09 06:10:58.137609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.599 [2024-12-09 06:10:58.137630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:03.599 [2024-12-09 06:10:58.142497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.599 [2024-12-09 06:10:58.142713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.599 [2024-12-09 06:10:58.142741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:03.599 [2024-12-09 06:10:58.147569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.599 [2024-12-09 06:10:58.147739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.599 [2024-12-09 06:10:58.147760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:03.599 [2024-12-09 06:10:58.152705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.599 [2024-12-09 06:10:58.152896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.599 [2024-12-09 06:10:58.152916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:03.599 [2024-12-09 06:10:58.157906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.599 [2024-12-09 06:10:58.158084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.599 [2024-12-09 06:10:58.158119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:03.599 [2024-12-09 06:10:58.163014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.599 [2024-12-09 06:10:58.163223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.599 [2024-12-09 06:10:58.163245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:03.599 [2024-12-09 06:10:58.168136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.599 [2024-12-09 06:10:58.168273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.599 [2024-12-09 06:10:58.168294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:03.599 [2024-12-09 06:10:58.173183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.599 [2024-12-09 06:10:58.173328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.599 [2024-12-09 06:10:58.173357] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:03.599 [2024-12-09 06:10:58.178270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.599 [2024-12-09 06:10:58.178458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.599 [2024-12-09 06:10:58.178479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:03.859 [2024-12-09 06:10:58.183469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.859 [2024-12-09 06:10:58.183648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.859 [2024-12-09 06:10:58.183669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:03.859 [2024-12-09 06:10:58.188643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.859 [2024-12-09 06:10:58.188839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.859 [2024-12-09 06:10:58.188859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:03.859 [2024-12-09 06:10:58.193859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.859 [2024-12-09 06:10:58.194031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.859 [2024-12-09 06:10:58.194052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:03.859 [2024-12-09 06:10:58.198933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.859 [2024-12-09 06:10:58.199142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.859 [2024-12-09 06:10:58.199163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:03.859 [2024-12-09 06:10:58.204211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.859 [2024-12-09 06:10:58.204374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.859 [2024-12-09 06:10:58.204395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:03.859 [2024-12-09 06:10:58.209328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.859 [2024-12-09 06:10:58.209518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.859 [2024-12-09 06:10:58.209539] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:03.859 [2024-12-09 06:10:58.214425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.859 [2024-12-09 06:10:58.214607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.859 [2024-12-09 06:10:58.214628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:03.859 [2024-12-09 06:10:58.219738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.859 [2024-12-09 06:10:58.219931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.859 [2024-12-09 06:10:58.219951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:03.859 [2024-12-09 06:10:58.224823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.859 [2024-12-09 06:10:58.224991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.859 [2024-12-09 06:10:58.225012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:03.859 [2024-12-09 06:10:58.229855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.859 [2024-12-09 06:10:58.230033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.859 [2024-12-09 06:10:58.230053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:03.860 [2024-12-09 06:10:58.235200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.860 [2024-12-09 06:10:58.235425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.860 [2024-12-09 06:10:58.235702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:03.860 [2024-12-09 06:10:58.240470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.860 [2024-12-09 06:10:58.240674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.860 [2024-12-09 06:10:58.240823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:03.860 [2024-12-09 06:10:58.245749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.860 [2024-12-09 06:10:58.245996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.860 [2024-12-09 
06:10:58.246283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:03.860 [2024-12-09 06:10:58.251089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.860 [2024-12-09 06:10:58.251344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.860 [2024-12-09 06:10:58.251586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:03.860 [2024-12-09 06:10:58.256409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.860 [2024-12-09 06:10:58.256664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.860 [2024-12-09 06:10:58.256825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:03.860 [2024-12-09 06:10:58.261782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.860 [2024-12-09 06:10:58.262025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.860 [2024-12-09 06:10:58.262442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:03.860 [2024-12-09 06:10:58.267275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.860 [2024-12-09 06:10:58.267536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.860 [2024-12-09 06:10:58.267767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:03.860 [2024-12-09 06:10:58.272482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.860 [2024-12-09 06:10:58.272679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.860 [2024-12-09 06:10:58.272853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:03.860 [2024-12-09 06:10:58.277745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.860 [2024-12-09 06:10:58.277976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.860 [2024-12-09 06:10:58.278208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:03.860 [2024-12-09 06:10:58.283050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.860 [2024-12-09 06:10:58.283316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:12:03.860 [2024-12-09 06:10:58.283560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:03.860 [2024-12-09 06:10:58.288350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.860 [2024-12-09 06:10:58.288561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.860 [2024-12-09 06:10:58.288706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:03.860 [2024-12-09 06:10:58.293555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.860 [2024-12-09 06:10:58.293741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.860 [2024-12-09 06:10:58.293899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:03.860 [2024-12-09 06:10:58.298736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.860 [2024-12-09 06:10:58.298931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.860 [2024-12-09 06:10:58.299259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:03.860 [2024-12-09 06:10:58.304214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.860 [2024-12-09 06:10:58.304486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.860 [2024-12-09 06:10:58.304730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:03.860 [2024-12-09 06:10:58.309520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.860 [2024-12-09 06:10:58.309759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.860 [2024-12-09 06:10:58.309993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:03.860 [2024-12-09 06:10:58.314816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.860 [2024-12-09 06:10:58.315073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.860 [2024-12-09 06:10:58.315233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:03.860 [2024-12-09 06:10:58.320013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.860 [2024-12-09 06:10:58.320220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 01:12:03.860 [2024-12-09 06:10:58.320398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:03.860 [2024-12-09 06:10:58.325279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.860 [2024-12-09 06:10:58.325508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.860 [2024-12-09 06:10:58.325663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:03.860 [2024-12-09 06:10:58.330535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.860 [2024-12-09 06:10:58.330736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.860 [2024-12-09 06:10:58.330935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:03.860 [2024-12-09 06:10:58.335757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.860 [2024-12-09 06:10:58.336018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.860 [2024-12-09 06:10:58.336040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:03.860 [2024-12-09 06:10:58.340975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.860 [2024-12-09 06:10:58.341187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.860 [2024-12-09 06:10:58.341209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:03.860 [2024-12-09 06:10:58.346428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.860 [2024-12-09 06:10:58.346677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.860 [2024-12-09 06:10:58.346900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:03.860 [2024-12-09 06:10:58.351792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.860 [2024-12-09 06:10:58.352051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.860 [2024-12-09 06:10:58.352262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:03.860 [2024-12-09 06:10:58.357020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.860 [2024-12-09 06:10:58.357293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.860 [2024-12-09 06:10:58.357543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:03.860 [2024-12-09 06:10:58.362257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.860 [2024-12-09 06:10:58.362478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.860 [2024-12-09 06:10:58.362649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:03.860 [2024-12-09 06:10:58.367524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.860 [2024-12-09 06:10:58.367768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.860 [2024-12-09 06:10:58.368017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:03.860 [2024-12-09 06:10:58.372763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.860 [2024-12-09 06:10:58.372959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.860 [2024-12-09 06:10:58.373154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:03.860 [2024-12-09 06:10:58.377962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.861 [2024-12-09 06:10:58.378195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.861 [2024-12-09 06:10:58.378394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:03.861 [2024-12-09 06:10:58.383346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.861 [2024-12-09 06:10:58.383566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.861 [2024-12-09 06:10:58.383750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:03.861 [2024-12-09 06:10:58.387947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.861 [2024-12-09 06:10:58.388345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.861 [2024-12-09 06:10:58.388606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:03.861 [2024-12-09 06:10:58.393175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.861 [2024-12-09 06:10:58.393840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:6 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.861 [2024-12-09 06:10:58.393876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:03.861 [2024-12-09 06:10:58.398564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.861 [2024-12-09 06:10:58.399049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.861 [2024-12-09 06:10:58.399076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:03.861 [2024-12-09 06:10:58.403595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.861 [2024-12-09 06:10:58.404101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.861 [2024-12-09 06:10:58.404127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:03.861 [2024-12-09 06:10:58.408763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.861 [2024-12-09 06:10:58.409399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.861 [2024-12-09 06:10:58.409428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:03.861 [2024-12-09 06:10:58.414086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.861 [2024-12-09 06:10:58.414614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.861 [2024-12-09 06:10:58.414640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:03.861 [2024-12-09 06:10:58.419217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.861 [2024-12-09 06:10:58.419708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.861 [2024-12-09 06:10:58.419734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:03.861 [2024-12-09 06:10:58.424328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.861 [2024-12-09 06:10:58.424811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.861 [2024-12-09 06:10:58.424837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:03.861 [2024-12-09 06:10:58.429433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.861 [2024-12-09 06:10:58.429933] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.861 [2024-12-09 06:10:58.429958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:03.861 [2024-12-09 06:10:58.434505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.861 [2024-12-09 06:10:58.435006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.861 [2024-12-09 06:10:58.435046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:03.861 [2024-12-09 06:10:58.439582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:03.861 [2024-12-09 06:10:58.440180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:03.861 [2024-12-09 06:10:58.440208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.122 [2024-12-09 06:10:58.444748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.122 [2024-12-09 06:10:58.445284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.122 [2024-12-09 06:10:58.445310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.122 [2024-12-09 06:10:58.449963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.122 [2024-12-09 06:10:58.450478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.122 [2024-12-09 06:10:58.450517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.122 [2024-12-09 06:10:58.455138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.122 [2024-12-09 06:10:58.455648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.122 [2024-12-09 06:10:58.455675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.122 [2024-12-09 06:10:58.460216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.122 [2024-12-09 06:10:58.460718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.122 [2024-12-09 06:10:58.460743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.122 [2024-12-09 06:10:58.465386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.122 [2024-12-09 
06:10:58.465911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.122 [2024-12-09 06:10:58.465941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.122 [2024-12-09 06:10:58.470529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.122 [2024-12-09 06:10:58.471119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.122 [2024-12-09 06:10:58.471145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.122 [2024-12-09 06:10:58.475624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.122 [2024-12-09 06:10:58.476119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.122 [2024-12-09 06:10:58.476155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.122 [2024-12-09 06:10:58.480736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.122 [2024-12-09 06:10:58.481207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.122 [2024-12-09 06:10:58.481232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.122 [2024-12-09 06:10:58.485934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.122 [2024-12-09 06:10:58.486573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.122 [2024-12-09 06:10:58.486602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.122 [2024-12-09 06:10:58.491148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.122 [2024-12-09 06:10:58.491642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.122 [2024-12-09 06:10:58.491668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.122 [2024-12-09 06:10:58.496278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.122 [2024-12-09 06:10:58.496780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.122 [2024-12-09 06:10:58.496806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.122 [2024-12-09 06:10:58.501457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with 
pdu=0x200016eff3c8 01:12:04.122 [2024-12-09 06:10:58.502087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.122 [2024-12-09 06:10:58.502126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.122 [2024-12-09 06:10:58.506842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.122 [2024-12-09 06:10:58.507355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.122 [2024-12-09 06:10:58.507381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.122 [2024-12-09 06:10:58.511980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.122 [2024-12-09 06:10:58.512457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.122 [2024-12-09 06:10:58.512485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.122 [2024-12-09 06:10:58.517066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.122 [2024-12-09 06:10:58.517693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.122 [2024-12-09 06:10:58.517737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.122 [2024-12-09 06:10:58.522332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.122 [2024-12-09 06:10:58.522832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.122 [2024-12-09 06:10:58.522859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.122 [2024-12-09 06:10:58.527454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.122 [2024-12-09 06:10:58.527945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.122 [2024-12-09 06:10:58.527971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.122 [2024-12-09 06:10:58.532635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.122 [2024-12-09 06:10:58.533246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.122 [2024-12-09 06:10:58.533274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.122 [2024-12-09 06:10:58.537760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.122 [2024-12-09 06:10:58.538273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.122 [2024-12-09 06:10:58.538299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.122 [2024-12-09 06:10:58.542940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.122 [2024-12-09 06:10:58.543446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.122 [2024-12-09 06:10:58.543471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.122 [2024-12-09 06:10:58.548111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.122 [2024-12-09 06:10:58.548610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.122 [2024-12-09 06:10:58.548635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.122 [2024-12-09 06:10:58.553176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.122 [2024-12-09 06:10:58.553658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.122 [2024-12-09 06:10:58.553683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.122 [2024-12-09 06:10:58.558224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.122 [2024-12-09 06:10:58.558710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.123 [2024-12-09 06:10:58.558736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.123 [2024-12-09 06:10:58.563342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.123 [2024-12-09 06:10:58.563834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.123 [2024-12-09 06:10:58.563859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.123 [2024-12-09 06:10:58.568415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.123 [2024-12-09 06:10:58.568912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.123 [2024-12-09 06:10:58.568938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.123 [2024-12-09 06:10:58.573678] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.123 [2024-12-09 06:10:58.574177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.123 [2024-12-09 06:10:58.574204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.123 [2024-12-09 06:10:58.578853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.123 [2024-12-09 06:10:58.579351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.123 [2024-12-09 06:10:58.579377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.123 [2024-12-09 06:10:58.583942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.123 [2024-12-09 06:10:58.584447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.123 [2024-12-09 06:10:58.584467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.123 [2024-12-09 06:10:58.589254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.123 [2024-12-09 06:10:58.589790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.123 [2024-12-09 06:10:58.589824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.123 [2024-12-09 06:10:58.594816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.123 [2024-12-09 06:10:58.594881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.123 [2024-12-09 06:10:58.594904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.123 [2024-12-09 06:10:58.599714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.123 [2024-12-09 06:10:58.599775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.123 [2024-12-09 06:10:58.599797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.123 [2024-12-09 06:10:58.604851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.123 [2024-12-09 06:10:58.604914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.123 [2024-12-09 06:10:58.604936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.123 [2024-12-09 06:10:58.609988] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.123 [2024-12-09 06:10:58.610169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.123 [2024-12-09 06:10:58.610191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.123 [2024-12-09 06:10:58.615310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.123 [2024-12-09 06:10:58.615370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.123 [2024-12-09 06:10:58.615392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.123 [2024-12-09 06:10:58.620453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.123 [2024-12-09 06:10:58.620518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.123 [2024-12-09 06:10:58.620540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.123 [2024-12-09 06:10:58.625552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.123 [2024-12-09 06:10:58.625732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.123 [2024-12-09 06:10:58.625753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.123 [2024-12-09 06:10:58.630863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.123 [2024-12-09 06:10:58.630919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.123 [2024-12-09 06:10:58.630940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.123 [2024-12-09 06:10:58.635899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.123 [2024-12-09 06:10:58.635966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.123 [2024-12-09 06:10:58.635987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.123 [2024-12-09 06:10:58.640845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.123 [2024-12-09 06:10:58.641033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.123 [2024-12-09 06:10:58.641054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.123 
[2024-12-09 06:10:58.646145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.123 [2024-12-09 06:10:58.646201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.123 [2024-12-09 06:10:58.646222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.123 [2024-12-09 06:10:58.651144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.123 [2024-12-09 06:10:58.651201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.123 [2024-12-09 06:10:58.651222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.123 [2024-12-09 06:10:58.656176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.123 [2024-12-09 06:10:58.656234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.123 [2024-12-09 06:10:58.656254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.123 [2024-12-09 06:10:58.661109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.123 [2024-12-09 06:10:58.661169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.123 [2024-12-09 06:10:58.661190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.123 [2024-12-09 06:10:58.666213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.123 [2024-12-09 06:10:58.666270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.123 [2024-12-09 06:10:58.666292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.123 [2024-12-09 06:10:58.671337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.123 [2024-12-09 06:10:58.671400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.123 [2024-12-09 06:10:58.671421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.123 [2024-12-09 06:10:58.676286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.123 [2024-12-09 06:10:58.676346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.123 [2024-12-09 06:10:58.676367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 01:12:04.123 [2024-12-09 06:10:58.681262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.123 [2024-12-09 06:10:58.681316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.123 [2024-12-09 06:10:58.681336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.123 [2024-12-09 06:10:58.686319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.123 [2024-12-09 06:10:58.686377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.123 [2024-12-09 06:10:58.686399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.123 [2024-12-09 06:10:58.691297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.123 [2024-12-09 06:10:58.691360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.123 [2024-12-09 06:10:58.691380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.123 [2024-12-09 06:10:58.696307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.124 [2024-12-09 06:10:58.696386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.124 [2024-12-09 06:10:58.696407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.124 [2024-12-09 06:10:58.701230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.124 [2024-12-09 06:10:58.701311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.124 [2024-12-09 06:10:58.701333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.384 [2024-12-09 06:10:58.706225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.384 [2024-12-09 06:10:58.706298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.384 [2024-12-09 06:10:58.706320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.384 [2024-12-09 06:10:58.711322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.384 [2024-12-09 06:10:58.711412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.384 [2024-12-09 06:10:58.711434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.384 [2024-12-09 06:10:58.716378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.384 [2024-12-09 06:10:58.716437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.384 [2024-12-09 06:10:58.716458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.384 [2024-12-09 06:10:58.721291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.384 [2024-12-09 06:10:58.721363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.384 [2024-12-09 06:10:58.721385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.384 [2024-12-09 06:10:58.726404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.384 [2024-12-09 06:10:58.726457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.384 [2024-12-09 06:10:58.726478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.384 [2024-12-09 06:10:58.731492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.384 [2024-12-09 06:10:58.731680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.384 [2024-12-09 06:10:58.731701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.384 [2024-12-09 06:10:58.736697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.384 [2024-12-09 06:10:58.736792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.384 [2024-12-09 06:10:58.736813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.384 [2024-12-09 06:10:58.741568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.384 [2024-12-09 06:10:58.741631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.384 [2024-12-09 06:10:58.741652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.384 [2024-12-09 06:10:58.746622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.385 [2024-12-09 06:10:58.746845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.385 [2024-12-09 06:10:58.746865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.385 [2024-12-09 06:10:58.751929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.385 [2024-12-09 06:10:58.752030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.385 [2024-12-09 06:10:58.752051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.385 [2024-12-09 06:10:58.757011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.385 [2024-12-09 06:10:58.757133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.385 [2024-12-09 06:10:58.757155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.385 [2024-12-09 06:10:58.762064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.385 [2024-12-09 06:10:58.762268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.385 [2024-12-09 06:10:58.762288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.385 [2024-12-09 06:10:58.767314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.385 [2024-12-09 06:10:58.767511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.385 [2024-12-09 06:10:58.767662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.385 [2024-12-09 06:10:58.772617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.385 [2024-12-09 06:10:58.772821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.385 [2024-12-09 06:10:58.773144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.385 [2024-12-09 06:10:58.777857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.385 [2024-12-09 06:10:58.778050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.385 [2024-12-09 06:10:58.778337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.385 [2024-12-09 06:10:58.783250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.385 [2024-12-09 06:10:58.783463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.385 [2024-12-09 06:10:58.783625] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.385 [2024-12-09 06:10:58.788474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.385 [2024-12-09 06:10:58.788693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.385 [2024-12-09 06:10:58.788836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.385 [2024-12-09 06:10:58.793748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.385 [2024-12-09 06:10:58.793954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.385 [2024-12-09 06:10:58.794157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.385 [2024-12-09 06:10:58.799041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.385 [2024-12-09 06:10:58.799289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.385 [2024-12-09 06:10:58.799457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.385 [2024-12-09 06:10:58.804380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.385 [2024-12-09 06:10:58.804631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.385 [2024-12-09 06:10:58.804792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.385 [2024-12-09 06:10:58.809659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.385 [2024-12-09 06:10:58.809848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.385 [2024-12-09 06:10:58.810006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.385 [2024-12-09 06:10:58.814994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.385 [2024-12-09 06:10:58.815231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.385 [2024-12-09 06:10:58.815365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.385 [2024-12-09 06:10:58.820382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.385 [2024-12-09 06:10:58.820659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.385 [2024-12-09 06:10:58.820906] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.385 [2024-12-09 06:10:58.825736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.385 [2024-12-09 06:10:58.825965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.385 [2024-12-09 06:10:58.826207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.385 [2024-12-09 06:10:58.830929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.385 [2024-12-09 06:10:58.831146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.385 [2024-12-09 06:10:58.831315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.385 [2024-12-09 06:10:58.836167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.385 [2024-12-09 06:10:58.836368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.385 [2024-12-09 06:10:58.836514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.385 [2024-12-09 06:10:58.841473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.385 [2024-12-09 06:10:58.841715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.385 [2024-12-09 06:10:58.841876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.385 [2024-12-09 06:10:58.846801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.385 [2024-12-09 06:10:58.847115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.385 [2024-12-09 06:10:58.847299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.385 [2024-12-09 06:10:58.852131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.385 [2024-12-09 06:10:58.852336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.385 [2024-12-09 06:10:58.852508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.385 [2024-12-09 06:10:58.857357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.385 [2024-12-09 06:10:58.857640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.385 [2024-12-09 
06:10:58.857801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.385 [2024-12-09 06:10:58.862631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.385 [2024-12-09 06:10:58.862828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.385 [2024-12-09 06:10:58.863018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.385 [2024-12-09 06:10:58.867873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.385 [2024-12-09 06:10:58.868080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.385 [2024-12-09 06:10:58.868257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.385 [2024-12-09 06:10:58.873171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.385 [2024-12-09 06:10:58.873428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.385 [2024-12-09 06:10:58.873652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.385 [2024-12-09 06:10:58.878510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.385 [2024-12-09 06:10:58.878578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.385 [2024-12-09 06:10:58.878600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.385 [2024-12-09 06:10:58.883553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.385 [2024-12-09 06:10:58.883649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.385 [2024-12-09 06:10:58.883671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.385 [2024-12-09 06:10:58.888564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.386 [2024-12-09 06:10:58.888752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.386 [2024-12-09 06:10:58.888773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.386 [2024-12-09 06:10:58.893849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.386 [2024-12-09 06:10:58.893943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 01:12:04.386 [2024-12-09 06:10:58.893966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.386 [2024-12-09 06:10:58.898916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.386 [2024-12-09 06:10:58.899039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.386 [2024-12-09 06:10:58.899060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.386 [2024-12-09 06:10:58.903921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.386 [2024-12-09 06:10:58.904097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.386 [2024-12-09 06:10:58.904119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.386 [2024-12-09 06:10:58.909107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.386 [2024-12-09 06:10:58.909221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.386 [2024-12-09 06:10:58.909242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.386 [2024-12-09 06:10:58.914230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.386 [2024-12-09 06:10:58.914294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.386 [2024-12-09 06:10:58.914315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.386 [2024-12-09 06:10:58.919330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.386 [2024-12-09 06:10:58.919410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.386 [2024-12-09 06:10:58.919432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.386 [2024-12-09 06:10:58.924399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.386 [2024-12-09 06:10:58.924454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.386 [2024-12-09 06:10:58.924475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.386 [2024-12-09 06:10:58.929409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.386 [2024-12-09 06:10:58.929539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.386 [2024-12-09 06:10:58.929560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.386 [2024-12-09 06:10:58.934465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.386 [2024-12-09 06:10:58.934559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.386 [2024-12-09 06:10:58.934581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.386 [2024-12-09 06:10:58.939532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.386 [2024-12-09 06:10:58.939741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.386 [2024-12-09 06:10:58.939762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.386 [2024-12-09 06:10:58.944783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.386 [2024-12-09 06:10:58.944880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.386 [2024-12-09 06:10:58.944901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.386 [2024-12-09 06:10:58.949865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.386 [2024-12-09 06:10:58.950054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.386 [2024-12-09 06:10:58.950076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.386 [2024-12-09 06:10:58.954920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.386 [2024-12-09 06:10:58.955110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.386 [2024-12-09 06:10:58.955133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.386 [2024-12-09 06:10:58.960138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.386 [2024-12-09 06:10:58.960218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.386 [2024-12-09 06:10:58.960239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.386 [2024-12-09 06:10:58.965192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.386 [2024-12-09 06:10:58.965270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.386 [2024-12-09 06:10:58.965291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.646 [2024-12-09 06:10:58.970235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.646 [2024-12-09 06:10:58.970354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.646 [2024-12-09 06:10:58.970377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.646 [2024-12-09 06:10:58.975313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.646 [2024-12-09 06:10:58.975409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.647 [2024-12-09 06:10:58.975431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.647 [2024-12-09 06:10:58.980348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.647 [2024-12-09 06:10:58.980428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.647 [2024-12-09 06:10:58.980449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.647 [2024-12-09 06:10:58.985474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.647 [2024-12-09 06:10:58.985668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.647 [2024-12-09 06:10:58.985689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.647 [2024-12-09 06:10:58.990652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.647 [2024-12-09 06:10:58.990785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.647 [2024-12-09 06:10:58.990806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.647 [2024-12-09 06:10:58.995665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.647 [2024-12-09 06:10:58.995783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.647 [2024-12-09 06:10:58.995804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.647 [2024-12-09 06:10:59.000740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.647 [2024-12-09 06:10:59.000926] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.647 [2024-12-09 06:10:59.000947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.647 [2024-12-09 06:10:59.006033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.647 [2024-12-09 06:10:59.006143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.647 [2024-12-09 06:10:59.006164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.647 [2024-12-09 06:10:59.011068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.647 [2024-12-09 06:10:59.011160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.647 [2024-12-09 06:10:59.011181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.647 [2024-12-09 06:10:59.016210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.647 [2024-12-09 06:10:59.016269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.647 [2024-12-09 06:10:59.016291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.647 [2024-12-09 06:10:59.021213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.647 [2024-12-09 06:10:59.021288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.647 [2024-12-09 06:10:59.021310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.647 [2024-12-09 06:10:59.026204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.647 [2024-12-09 06:10:59.026371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.647 [2024-12-09 06:10:59.026392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.647 [2024-12-09 06:10:59.031312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.647 [2024-12-09 06:10:59.031411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.647 [2024-12-09 06:10:59.031433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.647 [2024-12-09 06:10:59.036378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.647 [2024-12-09 06:10:59.036462] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.647 [2024-12-09 06:10:59.036484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.647 [2024-12-09 06:10:59.041504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.647 [2024-12-09 06:10:59.041560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.647 [2024-12-09 06:10:59.041582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.647 [2024-12-09 06:10:59.046616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.647 [2024-12-09 06:10:59.046717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.647 [2024-12-09 06:10:59.046739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.647 [2024-12-09 06:10:59.051728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.647 [2024-12-09 06:10:59.051923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.647 [2024-12-09 06:10:59.051944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.647 [2024-12-09 06:10:59.056895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.647 [2024-12-09 06:10:59.057047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.647 [2024-12-09 06:10:59.057069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.647 [2024-12-09 06:10:59.061881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.647 [2024-12-09 06:10:59.061969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.647 [2024-12-09 06:10:59.061990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.647 5961.00 IOPS, 745.12 MiB/s [2024-12-09T06:10:59.234Z] [2024-12-09 06:10:59.067786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.647 [2024-12-09 06:10:59.067967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.647 [2024-12-09 06:10:59.067988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.647 [2024-12-09 06:10:59.072938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) 
with pdu=0x200016eff3c8 01:12:04.647 [2024-12-09 06:10:59.073029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.647 [2024-12-09 06:10:59.073049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.647 [2024-12-09 06:10:59.077934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.647 [2024-12-09 06:10:59.078024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.647 [2024-12-09 06:10:59.078044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.647 [2024-12-09 06:10:59.082984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.647 [2024-12-09 06:10:59.083058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.647 [2024-12-09 06:10:59.083079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.647 [2024-12-09 06:10:59.088043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.647 [2024-12-09 06:10:59.088259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.647 [2024-12-09 06:10:59.088278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.647 [2024-12-09 06:10:59.093299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.647 [2024-12-09 06:10:59.093414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.647 [2024-12-09 06:10:59.093434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.647 [2024-12-09 06:10:59.098345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.647 [2024-12-09 06:10:59.098402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.647 [2024-12-09 06:10:59.098422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.647 [2024-12-09 06:10:59.103463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.647 [2024-12-09 06:10:59.103648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.647 [2024-12-09 06:10:59.103668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.647 [2024-12-09 06:10:59.108619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.647 [2024-12-09 06:10:59.108780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.647 [2024-12-09 06:10:59.108800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.648 [2024-12-09 06:10:59.113621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.648 [2024-12-09 06:10:59.113709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.648 [2024-12-09 06:10:59.113730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.648 [2024-12-09 06:10:59.118655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.648 [2024-12-09 06:10:59.118844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.648 [2024-12-09 06:10:59.118864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.648 [2024-12-09 06:10:59.123803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.648 [2024-12-09 06:10:59.123894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.648 [2024-12-09 06:10:59.123914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.648 [2024-12-09 06:10:59.128788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.648 [2024-12-09 06:10:59.128882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.648 [2024-12-09 06:10:59.128902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.648 [2024-12-09 06:10:59.133778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.648 [2024-12-09 06:10:59.134040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.648 [2024-12-09 06:10:59.134061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.648 [2024-12-09 06:10:59.139108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.648 [2024-12-09 06:10:59.139201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.648 [2024-12-09 06:10:59.139221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.648 [2024-12-09 06:10:59.144047] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.648 [2024-12-09 06:10:59.144188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.648 [2024-12-09 06:10:59.144209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.648 [2024-12-09 06:10:59.149047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.648 [2024-12-09 06:10:59.149163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.648 [2024-12-09 06:10:59.149183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.648 [2024-12-09 06:10:59.154008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.648 [2024-12-09 06:10:59.154094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.648 [2024-12-09 06:10:59.154124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.648 [2024-12-09 06:10:59.159016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.648 [2024-12-09 06:10:59.159118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.648 [2024-12-09 06:10:59.159151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.648 [2024-12-09 06:10:59.164008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.648 [2024-12-09 06:10:59.164108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.648 [2024-12-09 06:10:59.164139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.648 [2024-12-09 06:10:59.169055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.648 [2024-12-09 06:10:59.169268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.648 [2024-12-09 06:10:59.169287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.648 [2024-12-09 06:10:59.174235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.648 [2024-12-09 06:10:59.174320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.648 [2024-12-09 06:10:59.174340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.648 
[2024-12-09 06:10:59.179239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.648 [2024-12-09 06:10:59.179350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.648 [2024-12-09 06:10:59.179370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.648 [2024-12-09 06:10:59.184148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.648 [2024-12-09 06:10:59.184265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.648 [2024-12-09 06:10:59.184286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.648 [2024-12-09 06:10:59.189071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.648 [2024-12-09 06:10:59.189185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.648 [2024-12-09 06:10:59.189205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.648 [2024-12-09 06:10:59.194010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.648 [2024-12-09 06:10:59.194123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.648 [2024-12-09 06:10:59.194144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.648 [2024-12-09 06:10:59.198942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.648 [2024-12-09 06:10:59.199101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.648 [2024-12-09 06:10:59.199122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.648 [2024-12-09 06:10:59.203962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.648 [2024-12-09 06:10:59.204164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.648 [2024-12-09 06:10:59.204184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.648 [2024-12-09 06:10:59.209173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.648 [2024-12-09 06:10:59.209261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.648 [2024-12-09 06:10:59.209281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 01:12:04.648 [2024-12-09 06:10:59.214099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.648 [2024-12-09 06:10:59.214187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.648 [2024-12-09 06:10:59.214207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.648 [2024-12-09 06:10:59.219069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.648 [2024-12-09 06:10:59.219176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.648 [2024-12-09 06:10:59.219196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.648 [2024-12-09 06:10:59.223995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.648 [2024-12-09 06:10:59.224196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.648 [2024-12-09 06:10:59.224216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.648 [2024-12-09 06:10:59.229320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.648 [2024-12-09 06:10:59.229469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.648 [2024-12-09 06:10:59.229491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.909 [2024-12-09 06:10:59.234591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.909 [2024-12-09 06:10:59.234708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.909 [2024-12-09 06:10:59.234728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.909 [2024-12-09 06:10:59.239627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.909 [2024-12-09 06:10:59.239859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.909 [2024-12-09 06:10:59.239879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.909 [2024-12-09 06:10:59.244896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.909 [2024-12-09 06:10:59.245030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.909 [2024-12-09 06:10:59.245050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.910 [2024-12-09 06:10:59.250116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.910 [2024-12-09 06:10:59.250246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.910 [2024-12-09 06:10:59.250268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.910 [2024-12-09 06:10:59.255287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.910 [2024-12-09 06:10:59.255407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.910 [2024-12-09 06:10:59.255428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.910 [2024-12-09 06:10:59.260469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.910 [2024-12-09 06:10:59.260562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.910 [2024-12-09 06:10:59.260583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.910 [2024-12-09 06:10:59.265589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.910 [2024-12-09 06:10:59.265761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.910 [2024-12-09 06:10:59.265782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.910 [2024-12-09 06:10:59.270714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.910 [2024-12-09 06:10:59.270935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.910 [2024-12-09 06:10:59.270955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.910 [2024-12-09 06:10:59.276028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.910 [2024-12-09 06:10:59.276140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.910 [2024-12-09 06:10:59.276160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.910 [2024-12-09 06:10:59.281103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.910 [2024-12-09 06:10:59.281259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.910 [2024-12-09 06:10:59.281278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.910 [2024-12-09 06:10:59.286073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.910 [2024-12-09 06:10:59.286266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.910 [2024-12-09 06:10:59.286287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.910 [2024-12-09 06:10:59.291070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.910 [2024-12-09 06:10:59.291286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.910 [2024-12-09 06:10:59.291306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.910 [2024-12-09 06:10:59.296363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.910 [2024-12-09 06:10:59.296429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.910 [2024-12-09 06:10:59.296449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.910 [2024-12-09 06:10:59.301464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.910 [2024-12-09 06:10:59.301590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.910 [2024-12-09 06:10:59.301612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.910 [2024-12-09 06:10:59.306494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.910 [2024-12-09 06:10:59.306724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.910 [2024-12-09 06:10:59.306744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.910 [2024-12-09 06:10:59.311733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.910 [2024-12-09 06:10:59.311832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.910 [2024-12-09 06:10:59.311852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.910 [2024-12-09 06:10:59.316841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.910 [2024-12-09 06:10:59.316951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.910 [2024-12-09 06:10:59.316971] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.910 [2024-12-09 06:10:59.321880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.910 [2024-12-09 06:10:59.322113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.910 [2024-12-09 06:10:59.322134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.910 [2024-12-09 06:10:59.327076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.910 [2024-12-09 06:10:59.327190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.910 [2024-12-09 06:10:59.327210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.910 [2024-12-09 06:10:59.332179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.910 [2024-12-09 06:10:59.332267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.910 [2024-12-09 06:10:59.332287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.910 [2024-12-09 06:10:59.337272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.910 [2024-12-09 06:10:59.337365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.910 [2024-12-09 06:10:59.337385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.910 [2024-12-09 06:10:59.342444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.910 [2024-12-09 06:10:59.342519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.910 [2024-12-09 06:10:59.342539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.910 [2024-12-09 06:10:59.347556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.910 [2024-12-09 06:10:59.347661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.910 [2024-12-09 06:10:59.347681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.910 [2024-12-09 06:10:59.352588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.910 [2024-12-09 06:10:59.352689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.910 [2024-12-09 
06:10:59.352708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.910 [2024-12-09 06:10:59.357750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.910 [2024-12-09 06:10:59.357932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.910 [2024-12-09 06:10:59.357953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.910 [2024-12-09 06:10:59.363075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.910 [2024-12-09 06:10:59.363210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.910 [2024-12-09 06:10:59.363231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.910 [2024-12-09 06:10:59.368119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.910 [2024-12-09 06:10:59.368199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.910 [2024-12-09 06:10:59.368219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.910 [2024-12-09 06:10:59.373160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.910 [2024-12-09 06:10:59.373271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.910 [2024-12-09 06:10:59.373292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.910 [2024-12-09 06:10:59.378120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.910 [2024-12-09 06:10:59.378195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.910 [2024-12-09 06:10:59.378215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.910 [2024-12-09 06:10:59.383123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.910 [2024-12-09 06:10:59.383215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.910 [2024-12-09 06:10:59.383235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.911 [2024-12-09 06:10:59.388145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.911 [2024-12-09 06:10:59.388219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:12:04.911 [2024-12-09 06:10:59.388240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.911 [2024-12-09 06:10:59.393061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.911 [2024-12-09 06:10:59.393176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.911 [2024-12-09 06:10:59.393197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.911 [2024-12-09 06:10:59.398154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.911 [2024-12-09 06:10:59.398299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.911 [2024-12-09 06:10:59.398318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.911 [2024-12-09 06:10:59.403199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.911 [2024-12-09 06:10:59.403304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.911 [2024-12-09 06:10:59.403323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.911 [2024-12-09 06:10:59.408202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.911 [2024-12-09 06:10:59.408360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.911 [2024-12-09 06:10:59.408379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.911 [2024-12-09 06:10:59.413161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.911 [2024-12-09 06:10:59.413224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.911 [2024-12-09 06:10:59.413243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.911 [2024-12-09 06:10:59.418136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.911 [2024-12-09 06:10:59.418235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.911 [2024-12-09 06:10:59.418256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.911 [2024-12-09 06:10:59.423131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.911 [2024-12-09 06:10:59.423215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 01:12:04.911 [2024-12-09 06:10:59.423235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.911 [2024-12-09 06:10:59.428145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.911 [2024-12-09 06:10:59.428238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.911 [2024-12-09 06:10:59.428258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.911 [2024-12-09 06:10:59.433030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.911 [2024-12-09 06:10:59.433142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.911 [2024-12-09 06:10:59.433162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.911 [2024-12-09 06:10:59.438069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.911 [2024-12-09 06:10:59.438162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.911 [2024-12-09 06:10:59.438182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.911 [2024-12-09 06:10:59.443073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.911 [2024-12-09 06:10:59.443186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.911 [2024-12-09 06:10:59.443206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.911 [2024-12-09 06:10:59.448044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.911 [2024-12-09 06:10:59.448153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.911 [2024-12-09 06:10:59.448173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.911 [2024-12-09 06:10:59.453005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.911 [2024-12-09 06:10:59.453110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.911 [2024-12-09 06:10:59.453141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.911 [2024-12-09 06:10:59.458037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.911 [2024-12-09 06:10:59.458241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.911 [2024-12-09 06:10:59.458261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.911 [2024-12-09 06:10:59.463314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.911 [2024-12-09 06:10:59.463489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.911 [2024-12-09 06:10:59.463679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.911 [2024-12-09 06:10:59.468549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.911 [2024-12-09 06:10:59.468839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.911 [2024-12-09 06:10:59.469003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:04.911 [2024-12-09 06:10:59.473772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.911 [2024-12-09 06:10:59.474012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.911 [2024-12-09 06:10:59.474276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:04.911 [2024-12-09 06:10:59.479064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.911 [2024-12-09 06:10:59.479356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.911 [2024-12-09 06:10:59.479501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:04.911 [2024-12-09 06:10:59.484246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.911 [2024-12-09 06:10:59.484480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.911 [2024-12-09 06:10:59.484632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:04.911 [2024-12-09 06:10:59.489410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:04.911 [2024-12-09 06:10:59.489630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:04.911 [2024-12-09 06:10:59.489788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:05.172 [2024-12-09 06:10:59.494587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.172 [2024-12-09 06:10:59.494819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:3 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.172 [2024-12-09 06:10:59.495026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:05.172 [2024-12-09 06:10:59.499863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.172 [2024-12-09 06:10:59.500072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.172 [2024-12-09 06:10:59.500300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:05.172 [2024-12-09 06:10:59.505021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.172 [2024-12-09 06:10:59.505320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.172 [2024-12-09 06:10:59.505510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:05.172 [2024-12-09 06:10:59.510302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.172 [2024-12-09 06:10:59.510519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.172 [2024-12-09 06:10:59.510700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:05.172 [2024-12-09 06:10:59.515601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.172 [2024-12-09 06:10:59.515772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.172 [2024-12-09 06:10:59.515794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:05.172 [2024-12-09 06:10:59.520672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.172 [2024-12-09 06:10:59.520920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.172 [2024-12-09 06:10:59.520941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:05.172 [2024-12-09 06:10:59.525988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.172 [2024-12-09 06:10:59.526166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.172 [2024-12-09 06:10:59.526188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:05.172 [2024-12-09 06:10:59.530994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.172 [2024-12-09 06:10:59.531202] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.172 [2024-12-09 06:10:59.531223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:05.172 [2024-12-09 06:10:59.536032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.172 [2024-12-09 06:10:59.536233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.172 [2024-12-09 06:10:59.536254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:05.172 [2024-12-09 06:10:59.541235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.172 [2024-12-09 06:10:59.541425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.172 [2024-12-09 06:10:59.541445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:05.172 [2024-12-09 06:10:59.546341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.172 [2024-12-09 06:10:59.546516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.172 [2024-12-09 06:10:59.546537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:05.172 [2024-12-09 06:10:59.550770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.172 [2024-12-09 06:10:59.551156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.172 [2024-12-09 06:10:59.551183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:05.172 [2024-12-09 06:10:59.555666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.172 [2024-12-09 06:10:59.556298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.172 [2024-12-09 06:10:59.556325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:05.172 [2024-12-09 06:10:59.560802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.172 [2024-12-09 06:10:59.561276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.172 [2024-12-09 06:10:59.561301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:05.173 [2024-12-09 06:10:59.565862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.173 [2024-12-09 
06:10:59.566374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.173 [2024-12-09 06:10:59.566400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:05.173 [2024-12-09 06:10:59.571023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.173 [2024-12-09 06:10:59.571646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.173 [2024-12-09 06:10:59.571674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:05.173 [2024-12-09 06:10:59.576273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.173 [2024-12-09 06:10:59.576765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.173 [2024-12-09 06:10:59.576790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:05.173 [2024-12-09 06:10:59.581328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.173 [2024-12-09 06:10:59.581845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.173 [2024-12-09 06:10:59.581870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:05.173 [2024-12-09 06:10:59.586434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.173 [2024-12-09 06:10:59.587075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.173 [2024-12-09 06:10:59.587112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:05.173 [2024-12-09 06:10:59.591744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.173 [2024-12-09 06:10:59.592233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.173 [2024-12-09 06:10:59.592258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:05.173 [2024-12-09 06:10:59.596876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.173 [2024-12-09 06:10:59.597373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.173 [2024-12-09 06:10:59.597417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:05.173 [2024-12-09 06:10:59.602062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with 
pdu=0x200016eff3c8 01:12:05.173 [2024-12-09 06:10:59.602661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.173 [2024-12-09 06:10:59.602691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:05.173 [2024-12-09 06:10:59.607101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.173 [2024-12-09 06:10:59.607173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.173 [2024-12-09 06:10:59.607194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:05.173 [2024-12-09 06:10:59.612265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.173 [2024-12-09 06:10:59.612321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.173 [2024-12-09 06:10:59.612353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:05.173 [2024-12-09 06:10:59.617303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.173 [2024-12-09 06:10:59.617364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.173 [2024-12-09 06:10:59.617401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:05.173 [2024-12-09 06:10:59.622540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.173 [2024-12-09 06:10:59.622599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.173 [2024-12-09 06:10:59.622619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:05.173 [2024-12-09 06:10:59.627747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.173 [2024-12-09 06:10:59.627802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.173 [2024-12-09 06:10:59.627824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:05.173 [2024-12-09 06:10:59.632858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.173 [2024-12-09 06:10:59.633022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.173 [2024-12-09 06:10:59.633044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:05.173 [2024-12-09 06:10:59.638099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.173 [2024-12-09 06:10:59.638157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.173 [2024-12-09 06:10:59.638178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:05.173 [2024-12-09 06:10:59.643157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.173 [2024-12-09 06:10:59.643214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.173 [2024-12-09 06:10:59.643235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:05.173 [2024-12-09 06:10:59.648302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.173 [2024-12-09 06:10:59.648378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.173 [2024-12-09 06:10:59.648399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:05.173 [2024-12-09 06:10:59.653397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.173 [2024-12-09 06:10:59.653452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.173 [2024-12-09 06:10:59.653474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:05.173 [2024-12-09 06:10:59.658463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.173 [2024-12-09 06:10:59.658518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.173 [2024-12-09 06:10:59.658538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:05.173 [2024-12-09 06:10:59.663576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.173 [2024-12-09 06:10:59.663659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.173 [2024-12-09 06:10:59.663680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:05.173 [2024-12-09 06:10:59.668624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.173 [2024-12-09 06:10:59.668819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.173 [2024-12-09 06:10:59.668840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:05.173 [2024-12-09 06:10:59.673975] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.173 [2024-12-09 06:10:59.674031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.173 [2024-12-09 06:10:59.674053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:05.173 [2024-12-09 06:10:59.678932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.173 [2024-12-09 06:10:59.678986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.173 [2024-12-09 06:10:59.679007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:05.173 [2024-12-09 06:10:59.684003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.173 [2024-12-09 06:10:59.684187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.173 [2024-12-09 06:10:59.684208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:05.173 [2024-12-09 06:10:59.689331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.173 [2024-12-09 06:10:59.689395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.173 [2024-12-09 06:10:59.689415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:05.173 [2024-12-09 06:10:59.694400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.173 [2024-12-09 06:10:59.694478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.173 [2024-12-09 06:10:59.694507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:05.173 [2024-12-09 06:10:59.699428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.173 [2024-12-09 06:10:59.699479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.173 [2024-12-09 06:10:59.699500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:05.174 [2024-12-09 06:10:59.704526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.174 [2024-12-09 06:10:59.704577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.174 [2024-12-09 06:10:59.704599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:05.174 [2024-12-09 06:10:59.709588] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.174 [2024-12-09 06:10:59.709644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.174 [2024-12-09 06:10:59.709665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:05.174 [2024-12-09 06:10:59.714697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.174 [2024-12-09 06:10:59.714766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.174 [2024-12-09 06:10:59.714786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:05.174 [2024-12-09 06:10:59.719760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.174 [2024-12-09 06:10:59.719946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.174 [2024-12-09 06:10:59.719966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:05.174 [2024-12-09 06:10:59.724955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.174 [2024-12-09 06:10:59.725019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.174 [2024-12-09 06:10:59.725039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:05.174 [2024-12-09 06:10:59.729956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.174 [2024-12-09 06:10:59.730012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.174 [2024-12-09 06:10:59.730033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:05.174 [2024-12-09 06:10:59.734981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.174 [2024-12-09 06:10:59.735033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.174 [2024-12-09 06:10:59.735055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:05.174 [2024-12-09 06:10:59.739814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.174 [2024-12-09 06:10:59.740004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.174 [2024-12-09 06:10:59.740025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:05.174 
[2024-12-09 06:10:59.745119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.174 [2024-12-09 06:10:59.745174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.174 [2024-12-09 06:10:59.745195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:05.174 [2024-12-09 06:10:59.750161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.174 [2024-12-09 06:10:59.750215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.174 [2024-12-09 06:10:59.750236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:05.174 [2024-12-09 06:10:59.755262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.174 [2024-12-09 06:10:59.755317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.174 [2024-12-09 06:10:59.755338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:05.434 [2024-12-09 06:10:59.760338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.434 [2024-12-09 06:10:59.760399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.434 [2024-12-09 06:10:59.760420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:05.434 [2024-12-09 06:10:59.765444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.434 [2024-12-09 06:10:59.765501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.434 [2024-12-09 06:10:59.765522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:05.434 [2024-12-09 06:10:59.770381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.434 [2024-12-09 06:10:59.770435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.434 [2024-12-09 06:10:59.770457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:05.434 [2024-12-09 06:10:59.775453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.434 [2024-12-09 06:10:59.775508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.434 [2024-12-09 06:10:59.775530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 01:12:05.434 [2024-12-09 06:10:59.780467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.434 [2024-12-09 06:10:59.780521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.434 [2024-12-09 06:10:59.780542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:05.434 [2024-12-09 06:10:59.785553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.434 [2024-12-09 06:10:59.785741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.434 [2024-12-09 06:10:59.785761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:05.434 [2024-12-09 06:10:59.790658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.434 [2024-12-09 06:10:59.790718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.434 [2024-12-09 06:10:59.790739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:05.434 [2024-12-09 06:10:59.795646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.434 [2024-12-09 06:10:59.795702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.434 [2024-12-09 06:10:59.795723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:05.434 [2024-12-09 06:10:59.800628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.434 [2024-12-09 06:10:59.800816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.434 [2024-12-09 06:10:59.800837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:05.434 [2024-12-09 06:10:59.805819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.434 [2024-12-09 06:10:59.805898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.434 [2024-12-09 06:10:59.805919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:05.434 [2024-12-09 06:10:59.810905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.434 [2024-12-09 06:10:59.810973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.434 [2024-12-09 06:10:59.810994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:05.434 [2024-12-09 06:10:59.816093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.434 [2024-12-09 06:10:59.816302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.434 [2024-12-09 06:10:59.816323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:05.434 [2024-12-09 06:10:59.821302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.434 [2024-12-09 06:10:59.821375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.434 [2024-12-09 06:10:59.821404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:05.434 [2024-12-09 06:10:59.826468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.434 [2024-12-09 06:10:59.826536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.434 [2024-12-09 06:10:59.826557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:05.434 [2024-12-09 06:10:59.831454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.434 [2024-12-09 06:10:59.831628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.434 [2024-12-09 06:10:59.831649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:05.434 [2024-12-09 06:10:59.836566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.434 [2024-12-09 06:10:59.836649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.434 [2024-12-09 06:10:59.836670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:05.434 [2024-12-09 06:10:59.841544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.434 [2024-12-09 06:10:59.841603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.434 [2024-12-09 06:10:59.841624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:05.435 [2024-12-09 06:10:59.846580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.435 [2024-12-09 06:10:59.846758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.435 [2024-12-09 06:10:59.846778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:05.435 [2024-12-09 06:10:59.851864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.435 [2024-12-09 06:10:59.852062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.435 [2024-12-09 06:10:59.852236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:05.435 [2024-12-09 06:10:59.857035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.435 [2024-12-09 06:10:59.857274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.435 [2024-12-09 06:10:59.857495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:05.435 [2024-12-09 06:10:59.862302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.435 [2024-12-09 06:10:59.862490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.435 [2024-12-09 06:10:59.862636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:05.435 [2024-12-09 06:10:59.867452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.435 [2024-12-09 06:10:59.867649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.435 [2024-12-09 06:10:59.867850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:05.435 [2024-12-09 06:10:59.872695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.435 [2024-12-09 06:10:59.872909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.435 [2024-12-09 06:10:59.873089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:05.435 [2024-12-09 06:10:59.877899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.435 [2024-12-09 06:10:59.878086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.435 [2024-12-09 06:10:59.878326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:05.435 [2024-12-09 06:10:59.883107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.435 [2024-12-09 06:10:59.883296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.435 [2024-12-09 06:10:59.883450] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:05.435 [2024-12-09 06:10:59.888006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.435 [2024-12-09 06:10:59.888231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.435 [2024-12-09 06:10:59.888387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:05.435 [2024-12-09 06:10:59.893223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.435 [2024-12-09 06:10:59.893440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.435 [2024-12-09 06:10:59.893596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:05.435 [2024-12-09 06:10:59.898389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.435 [2024-12-09 06:10:59.898601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.435 [2024-12-09 06:10:59.898646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:05.435 [2024-12-09 06:10:59.903579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.435 [2024-12-09 06:10:59.903635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.435 [2024-12-09 06:10:59.903656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:05.435 [2024-12-09 06:10:59.908558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.435 [2024-12-09 06:10:59.908654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.435 [2024-12-09 06:10:59.908675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:05.435 [2024-12-09 06:10:59.913700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.435 [2024-12-09 06:10:59.913885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.435 [2024-12-09 06:10:59.913906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:05.435 [2024-12-09 06:10:59.918847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.435 [2024-12-09 06:10:59.918915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.435 [2024-12-09 
06:10:59.918936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:05.435 [2024-12-09 06:10:59.923959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.435 [2024-12-09 06:10:59.924016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.435 [2024-12-09 06:10:59.924038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:05.435 [2024-12-09 06:10:59.928999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.435 [2024-12-09 06:10:59.929061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.435 [2024-12-09 06:10:59.929082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:05.435 [2024-12-09 06:10:59.934150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.435 [2024-12-09 06:10:59.934208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.435 [2024-12-09 06:10:59.934229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:05.435 [2024-12-09 06:10:59.939251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.435 [2024-12-09 06:10:59.939347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.435 [2024-12-09 06:10:59.939368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:05.435 [2024-12-09 06:10:59.944226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.435 [2024-12-09 06:10:59.944296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.435 [2024-12-09 06:10:59.944317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:05.435 [2024-12-09 06:10:59.949144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.435 [2024-12-09 06:10:59.949208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.435 [2024-12-09 06:10:59.949229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:05.435 [2024-12-09 06:10:59.954198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.435 [2024-12-09 06:10:59.954257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 01:12:05.435 [2024-12-09 06:10:59.954278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:05.435 [2024-12-09 06:10:59.959254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.435 [2024-12-09 06:10:59.959328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.435 [2024-12-09 06:10:59.959349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:05.435 [2024-12-09 06:10:59.964252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.435 [2024-12-09 06:10:59.964328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.435 [2024-12-09 06:10:59.964349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:05.435 [2024-12-09 06:10:59.969291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.435 [2024-12-09 06:10:59.969379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.435 [2024-12-09 06:10:59.969416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:05.435 [2024-12-09 06:10:59.974419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.435 [2024-12-09 06:10:59.974486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.435 [2024-12-09 06:10:59.974508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:05.435 [2024-12-09 06:10:59.979426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.435 [2024-12-09 06:10:59.979619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.435 [2024-12-09 06:10:59.979640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:05.436 [2024-12-09 06:10:59.984664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.436 [2024-12-09 06:10:59.984720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.436 [2024-12-09 06:10:59.984741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:05.436 [2024-12-09 06:10:59.989708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.436 [2024-12-09 06:10:59.989766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1216 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.436 [2024-12-09 06:10:59.989787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:05.436 [2024-12-09 06:10:59.994693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.436 [2024-12-09 06:10:59.994877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.436 [2024-12-09 06:10:59.994897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:05.436 [2024-12-09 06:11:00.000008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.436 [2024-12-09 06:11:00.000077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.436 [2024-12-09 06:11:00.000112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:05.436 [2024-12-09 06:11:00.005419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.436 [2024-12-09 06:11:00.005498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.436 [2024-12-09 06:11:00.005521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:05.436 [2024-12-09 06:11:00.010708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.436 [2024-12-09 06:11:00.010902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.436 [2024-12-09 06:11:00.010924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:05.436 [2024-12-09 06:11:00.016132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.436 [2024-12-09 06:11:00.016224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.436 [2024-12-09 06:11:00.016246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:05.693 [2024-12-09 06:11:00.021138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.693 [2024-12-09 06:11:00.021300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.693 [2024-12-09 06:11:00.021321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:05.693 [2024-12-09 06:11:00.026203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.693 [2024-12-09 06:11:00.026320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.693 [2024-12-09 06:11:00.026342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:05.693 [2024-12-09 06:11:00.031328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.693 [2024-12-09 06:11:00.031441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.693 [2024-12-09 06:11:00.031462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:05.693 [2024-12-09 06:11:00.036457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.693 [2024-12-09 06:11:00.036554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.693 [2024-12-09 06:11:00.036577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:05.693 [2024-12-09 06:11:00.041624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.694 [2024-12-09 06:11:00.041838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.694 [2024-12-09 06:11:00.041860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:05.694 [2024-12-09 06:11:00.046950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.694 [2024-12-09 06:11:00.047013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.694 [2024-12-09 06:11:00.047034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:05.694 [2024-12-09 06:11:00.052083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.694 [2024-12-09 06:11:00.052259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.694 [2024-12-09 06:11:00.052280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:12:05.694 [2024-12-09 06:11:00.057127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.694 [2024-12-09 06:11:00.057259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.694 [2024-12-09 06:11:00.057280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:12:05.694 6010.50 IOPS, 751.31 MiB/s [2024-12-09T06:11:00.281Z] [2024-12-09 06:11:00.063567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1864d10) with pdu=0x200016eff3c8 01:12:05.694 [2024-12-09 
06:11:00.063794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:12:05.694 [2024-12-09 06:11:00.063815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:12:05.694 01:12:05.694 Latency(us) 01:12:05.694 [2024-12-09T06:11:00.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:12:05.694 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 01:12:05.694 nvme0n1 : 2.00 6007.85 750.98 0.00 0.00 2658.84 1789.74 9159.25 01:12:05.694 [2024-12-09T06:11:00.281Z] =================================================================================================================== 01:12:05.694 [2024-12-09T06:11:00.281Z] Total : 6007.85 750.98 0.00 0.00 2658.84 1789.74 9159.25 01:12:05.694 { 01:12:05.694 "results": [ 01:12:05.694 { 01:12:05.694 "job": "nvme0n1", 01:12:05.694 "core_mask": "0x2", 01:12:05.694 "workload": "randwrite", 01:12:05.694 "status": "finished", 01:12:05.694 "queue_depth": 16, 01:12:05.694 "io_size": 131072, 01:12:05.694 "runtime": 2.00421, 01:12:05.694 "iops": 6007.853468448915, 01:12:05.694 "mibps": 750.9816835561144, 01:12:05.694 "io_failed": 0, 01:12:05.694 "io_timeout": 0, 01:12:05.694 "avg_latency_us": 2658.841660204476, 01:12:05.694 "min_latency_us": 1789.7381526104418, 01:12:05.694 "max_latency_us": 9159.248192771085 01:12:05.694 } 01:12:05.694 ], 01:12:05.694 "core_count": 1 01:12:05.694 } 01:12:05.694 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 01:12:05.694 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 01:12:05.694 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 01:12:05.694 | .driver_specific 01:12:05.694 | .nvme_error 01:12:05.694 | .status_code 01:12:05.694 | .command_transient_transport_error' 01:12:05.694 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 01:12:05.952 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 389 > 0 )) 01:12:05.952 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79995 01:12:05.952 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79995 ']' 01:12:05.952 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79995 01:12:05.952 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 01:12:05.952 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:12:05.952 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79995 01:12:05.952 killing process with pid 79995 01:12:05.952 Received shutdown signal, test time was about 2.000000 seconds 01:12:05.952 01:12:05.952 Latency(us) 01:12:05.952 [2024-12-09T06:11:00.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:12:05.952 [2024-12-09T06:11:00.539Z] =================================================================================================================== 
01:12:05.952 [2024-12-09T06:11:00.539Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:12:05.952 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:12:05.952 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:12:05.952 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79995' 01:12:05.952 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79995 01:12:05.952 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79995 01:12:06.212 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 79792 01:12:06.212 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79792 ']' 01:12:06.212 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79792 01:12:06.212 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 01:12:06.212 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:12:06.212 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79792 01:12:06.212 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:12:06.212 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:12:06.212 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79792' 01:12:06.212 killing process with pid 79792 01:12:06.212 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79792 01:12:06.212 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79792 01:12:06.212 ************************************ 01:12:06.212 END TEST nvmf_digest_error 01:12:06.212 ************************************ 01:12:06.212 01:12:06.212 real 0m17.337s 01:12:06.212 user 0m31.142s 01:12:06.212 sys 0m6.111s 01:12:06.212 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 01:12:06.212 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:12:06.471 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 01:12:06.471 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 01:12:06.471 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 01:12:06.471 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 01:12:06.471 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:12:06.471 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 01:12:06.471 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 01:12:06.471 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:12:06.471 rmmod nvme_tcp 01:12:06.471 rmmod nvme_fabrics 01:12:06.471 rmmod nvme_keyring 01:12:06.471 06:11:00 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:12:06.471 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 01:12:06.471 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 01:12:06.471 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 79792 ']' 01:12:06.471 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 79792 01:12:06.471 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 79792 ']' 01:12:06.471 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 79792 01:12:06.471 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79792) - No such process 01:12:06.471 Process with pid 79792 is not found 01:12:06.471 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 79792 is not found' 01:12:06.471 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:12:06.471 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:12:06.471 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:12:06.471 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 01:12:06.471 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 01:12:06.471 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:12:06.471 06:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 01:12:06.471 06:11:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:12:06.471 06:11:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:12:06.471 06:11:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:12:06.471 06:11:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:12:06.471 06:11:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:12:06.730 06:11:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:12:06.730 06:11:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:12:06.730 06:11:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:12:06.730 06:11:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:12:06.730 06:11:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:12:06.730 06:11:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:12:06.730 06:11:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:12:06.730 06:11:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:12:06.730 06:11:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:12:06.730 06:11:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:12:06.730 06:11:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 01:12:06.730 06:11:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 01:12:06.730 06:11:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:12:06.730 06:11:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:12:06.989 06:11:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 01:12:06.989 01:12:06.989 real 0m36.622s 01:12:06.989 user 1m3.570s 01:12:06.989 sys 0m13.021s 01:12:06.989 06:11:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 01:12:06.989 06:11:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 01:12:06.989 ************************************ 01:12:06.989 END TEST nvmf_digest 01:12:06.989 ************************************ 01:12:06.989 06:11:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 01:12:06.989 06:11:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 01:12:06.989 06:11:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 01:12:06.989 06:11:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:12:06.989 06:11:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:12:06.989 06:11:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:12:06.989 ************************************ 01:12:06.989 START TEST nvmf_host_multipath 01:12:06.989 ************************************ 01:12:06.989 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 01:12:06.989 * Looking for test storage... 01:12:06.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:12:06.989 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:12:06.989 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version 01:12:06.989 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- 
# case "$op" in 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:12:07.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:12:07.248 --rc genhtml_branch_coverage=1 01:12:07.248 --rc genhtml_function_coverage=1 01:12:07.248 --rc genhtml_legend=1 01:12:07.248 --rc geninfo_all_blocks=1 01:12:07.248 --rc geninfo_unexecuted_blocks=1 01:12:07.248 01:12:07.248 ' 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:12:07.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:12:07.248 --rc genhtml_branch_coverage=1 01:12:07.248 --rc genhtml_function_coverage=1 01:12:07.248 --rc genhtml_legend=1 01:12:07.248 --rc geninfo_all_blocks=1 01:12:07.248 --rc geninfo_unexecuted_blocks=1 01:12:07.248 01:12:07.248 ' 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:12:07.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:12:07.248 --rc genhtml_branch_coverage=1 01:12:07.248 --rc genhtml_function_coverage=1 01:12:07.248 --rc genhtml_legend=1 01:12:07.248 --rc geninfo_all_blocks=1 01:12:07.248 --rc geninfo_unexecuted_blocks=1 01:12:07.248 01:12:07.248 ' 01:12:07.248 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:12:07.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:12:07.249 --rc genhtml_branch_coverage=1 01:12:07.249 --rc genhtml_function_coverage=1 01:12:07.249 --rc genhtml_legend=1 01:12:07.249 --rc geninfo_all_blocks=1 01:12:07.249 --rc geninfo_unexecuted_blocks=1 
01:12:07.249 01:12:07.249 ' 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:12:07.249 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:12:07.249 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:12:07.250 Cannot find device "nvmf_init_br" 01:12:07.250 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 01:12:07.250 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:12:07.250 Cannot find device "nvmf_init_br2" 01:12:07.250 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 01:12:07.250 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:12:07.250 Cannot find device "nvmf_tgt_br" 01:12:07.250 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 01:12:07.250 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:12:07.250 Cannot find device "nvmf_tgt_br2" 01:12:07.250 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 01:12:07.250 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:12:07.250 Cannot find device "nvmf_init_br" 01:12:07.250 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 01:12:07.250 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:12:07.250 Cannot find device "nvmf_init_br2" 01:12:07.250 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 01:12:07.250 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:12:07.250 Cannot find device "nvmf_tgt_br" 01:12:07.250 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 01:12:07.250 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:12:07.250 Cannot find device "nvmf_tgt_br2" 01:12:07.250 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 01:12:07.250 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:12:07.507 Cannot find device "nvmf_br" 01:12:07.507 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 01:12:07.507 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:12:07.507 Cannot find device "nvmf_init_if" 01:12:07.507 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 01:12:07.507 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:12:07.507 Cannot find device "nvmf_init_if2" 01:12:07.507 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 01:12:07.507 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
01:12:07.507 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:12:07.507 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 01:12:07.507 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:12:07.507 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:12:07.507 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 01:12:07.507 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:12:07.507 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:12:07.507 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:12:07.507 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:12:07.507 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:12:07.507 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:12:07.507 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:12:07.507 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:12:07.507 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:12:07.507 06:11:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:12:07.507 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:12:07.507 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:12:07.507 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:12:07.507 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:12:07.507 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:12:07.507 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:12:07.507 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:12:07.507 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:12:07.507 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:12:07.507 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:12:07.507 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:12:07.507 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:12:07.507 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
01:12:07.766 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:12:07.766 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:12:07.766 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:12:07.766 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:12:07.766 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:12:07.766 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:12:07.766 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:12:07.766 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:12:07.766 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:12:07.766 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:12:07.766 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:12:07.766 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 01:12:07.766 01:12:07.766 --- 10.0.0.3 ping statistics --- 01:12:07.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:12:07.766 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 01:12:07.766 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:12:07.766 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:12:07.766 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.081 ms 01:12:07.766 01:12:07.766 --- 10.0.0.4 ping statistics --- 01:12:07.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:12:07.766 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 01:12:07.766 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:12:07.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:12:07.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 01:12:07.766 01:12:07.766 --- 10.0.0.1 ping statistics --- 01:12:07.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:12:07.766 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 01:12:07.766 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:12:07.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:12:07.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 01:12:07.766 01:12:07.766 --- 10.0.0.2 ping statistics --- 01:12:07.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:12:07.766 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 01:12:07.766 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:12:07.766 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 01:12:07.766 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:12:07.766 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:12:07.766 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:12:07.766 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:12:07.766 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:12:07.766 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:12:07.766 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:12:07.766 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 01:12:07.766 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:12:07.766 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 01:12:07.766 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:12:07.766 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=80311 01:12:07.766 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 01:12:07.767 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 80311 01:12:07.767 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80311 ']' 01:12:07.767 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:12:07.767 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 01:12:07.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:12:07.767 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:12:07.767 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 01:12:07.767 06:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:12:07.767 [2024-12-09 06:11:02.313219] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:12:07.767 [2024-12-09 06:11:02.313300] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:12:08.024 [2024-12-09 06:11:02.465139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:12:08.024 [2024-12-09 06:11:02.503399] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:12:08.024 [2024-12-09 06:11:02.503443] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:12:08.024 [2024-12-09 06:11:02.503452] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:12:08.024 [2024-12-09 06:11:02.503459] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:12:08.024 [2024-12-09 06:11:02.503466] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:12:08.024 [2024-12-09 06:11:02.504514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:12:08.024 [2024-12-09 06:11:02.504551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:12:08.024 [2024-12-09 06:11:02.545982] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:12:08.589 06:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:12:08.589 06:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 01:12:08.589 06:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:12:08.589 06:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 01:12:08.589 06:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:12:08.846 06:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:12:08.846 06:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80311 01:12:08.846 06:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:12:08.846 [2024-12-09 06:11:03.395409] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:12:08.846 06:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:12:09.104 Malloc0 01:12:09.104 06:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 01:12:09.361 06:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:12:09.621 06:11:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:12:09.621 [2024-12-09 06:11:04.191514] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:12:09.880 06:11:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:12:09.880 [2024-12-09 06:11:04.371787] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:12:09.880 06:11:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80361 01:12:09.880 06:11:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 01:12:09.880 06:11:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:12:09.880 06:11:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80361 /var/tmp/bdevperf.sock 01:12:09.880 06:11:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80361 ']' 01:12:09.880 06:11:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:12:09.880 06:11:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 01:12:09.880 06:11:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:12:09.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:12:09.880 06:11:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 01:12:09.880 06:11:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:12:10.816 06:11:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:12:10.816 06:11:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 01:12:10.816 06:11:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 01:12:11.073 06:11:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 01:12:11.331 Nvme0n1 01:12:11.331 06:11:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 01:12:11.589 Nvme0n1 01:12:11.589 06:11:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 01:12:11.589 06:11:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 01:12:12.524 06:11:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 01:12:12.524 06:11:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:12:12.782 06:11:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
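Taken together, the RPCs above build the target side and then hand bdevperf two TCP paths to the same subsystem: a tcp transport, a 64 MiB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 created with -r (ANA reporting) and listeners on 10.0.0.3:4420 and 10.0.0.3:4421, followed by two bdev_nvme_attach_controller calls with -x multipath against that NQN, so the initiator side ends up with a single Nvme0n1 bdev backed by both paths. A condensed sketch of the same sequence (commands copied from the log; ordering only, without the waits and error handling of multipath.sh):

  #!/usr/bin/env bash
  # Target-side subsystem setup plus initiator-side multipath attach, as logged.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Target side (nvmf_tgt RPC socket, /var/tmp/spdk.sock by default)
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

  # Initiator side (bdevperf RPC socket): two paths to the same NQN -> one Nvme0n1 bdev
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10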
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:12:13.040 06:11:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 01:12:13.041 06:11:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80406 01:12:13.041 06:11:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80311 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:12:13.041 06:11:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:12:19.636 06:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 01:12:19.636 06:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:12:19.636 06:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 01:12:19.636 06:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:12:19.636 Attaching 4 probes... 01:12:19.636 @path[10.0.0.3, 4421]: 20594 01:12:19.636 @path[10.0.0.3, 4421]: 21112 01:12:19.636 @path[10.0.0.3, 4421]: 21097 01:12:19.636 @path[10.0.0.3, 4421]: 21268 01:12:19.636 @path[10.0.0.3, 4421]: 21154 01:12:19.636 06:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:12:19.636 06:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 01:12:19.636 06:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:12:19.636 06:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 01:12:19.636 06:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 01:12:19.636 06:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 01:12:19.636 06:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80406 01:12:19.636 06:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:12:19.636 06:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 01:12:19.637 06:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:12:19.637 06:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 01:12:19.637 06:11:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 01:12:19.637 06:11:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80523 01:12:19.637 06:11:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80311 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:12:19.637 06:11:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:12:26.201 06:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:12:26.201 06:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 01:12:26.201 06:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 01:12:26.201 06:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:12:26.201 Attaching 4 probes... 01:12:26.201 @path[10.0.0.3, 4420]: 19961 01:12:26.201 @path[10.0.0.3, 4420]: 20384 01:12:26.201 @path[10.0.0.3, 4420]: 20385 01:12:26.201 @path[10.0.0.3, 4420]: 20475 01:12:26.201 @path[10.0.0.3, 4420]: 20320 01:12:26.201 06:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:12:26.201 06:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 01:12:26.201 06:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:12:26.201 06:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 01:12:26.201 06:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 01:12:26.201 06:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 01:12:26.201 06:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80523 01:12:26.201 06:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:12:26.201 06:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 01:12:26.201 06:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:12:26.201 06:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:12:26.201 06:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 01:12:26.201 06:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80632 01:12:26.201 06:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:12:26.201 06:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80311 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:12:32.826 06:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:12:32.826 06:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 01:12:32.826 06:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 01:12:32.827 06:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:12:32.827 Attaching 4 probes... 01:12:32.827 @path[10.0.0.3, 4421]: 15337 01:12:32.827 @path[10.0.0.3, 4421]: 21096 01:12:32.827 @path[10.0.0.3, 4421]: 21172 01:12:32.827 @path[10.0.0.3, 4421]: 21146 01:12:32.827 @path[10.0.0.3, 4421]: 21230 01:12:32.827 06:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:12:32.827 06:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 01:12:32.827 06:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:12:32.827 06:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 01:12:32.827 06:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 01:12:32.827 06:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 01:12:32.827 06:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80632 01:12:32.827 06:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:12:32.827 06:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 01:12:32.827 06:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:12:32.827 06:11:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 01:12:32.827 06:11:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 01:12:32.827 06:11:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80744 01:12:32.827 06:11:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:12:32.827 06:11:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80311 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:12:39.392 06:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:12:39.392 06:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 01:12:39.392 06:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 01:12:39.393 06:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:12:39.393 Attaching 4 probes... 
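Every confirm_io_on_port cycle above follows the same pattern: ask the target which listener currently advertises the expected ANA state (nvmf_subsystem_get_listeners filtered through jq), let the nvmf_path.bt bpftrace script count I/O per path for six seconds, then pull the port out of the first @path[10.0.0.3, PORT]: line of trace.txt with awk, cut and sed and compare the two. A compact sketch of just that comparison, assuming trace.txt has already been written by bpftrace.sh; the jq/awk/cut/sed expressions are the ones logged, while the exact pipeline order is reconstructed here:

  #!/usr/bin/env bash
  # Check that the port which actually served I/O (per the bpftrace counters in
  # trace.txt) matches the listener advertising the expected ANA state.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
  expected_state=optimized     # or non_optimized, or "" when no path should carry I/O

  filter='.[] | select (.ana_states[0].ana_state=="'"$expected_state"'") | .address.trsvcid'
  active_port=$($rpc nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 | jq -r "$filter")

  io_port=$(awk '$1=="@path[10.0.0.3," {print $2}' "$trace" | cut -d ']' -f1 | sed -n 1p)

  [[ "$io_port" == "$active_port" ]] && echo "I/O observed on port ${io_port:-<none>}, as expected"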
01:12:39.393 01:12:39.393 01:12:39.393 01:12:39.393 01:12:39.393 01:12:39.393 06:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:12:39.393 06:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 01:12:39.393 06:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:12:39.393 06:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 01:12:39.393 06:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 01:12:39.393 06:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 01:12:39.393 06:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80744 01:12:39.393 06:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:12:39.393 06:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 01:12:39.393 06:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:12:39.393 06:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:12:39.652 06:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 01:12:39.652 06:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80862 01:12:39.652 06:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:12:39.652 06:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80311 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:12:46.218 06:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:12:46.218 06:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 01:12:46.218 06:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 01:12:46.218 06:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:12:46.218 Attaching 4 probes... 
01:12:46.218 @path[10.0.0.3, 4421]: 20384 01:12:46.218 @path[10.0.0.3, 4421]: 20960 01:12:46.218 @path[10.0.0.3, 4421]: 20924 01:12:46.218 @path[10.0.0.3, 4421]: 20880 01:12:46.218 @path[10.0.0.3, 4421]: 20925 01:12:46.218 06:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:12:46.218 06:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 01:12:46.218 06:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:12:46.218 06:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 01:12:46.218 06:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 01:12:46.218 06:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 01:12:46.218 06:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80862 01:12:46.218 06:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:12:46.218 06:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:12:46.218 [2024-12-09 06:11:40.384392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc98d0 is same with the state(6) to be set 01:12:46.218 [2024-12-09 06:11:40.384613] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc98d0 is same with the state(6) to be set 01:12:46.218 [2024-12-09 06:11:40.384628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc98d0 is same with the state(6) to be set 01:12:46.218 06:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 01:12:47.156 06:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 01:12:47.156 06:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80985 01:12:47.156 06:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:12:47.156 06:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80311 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:12:53.726 06:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:12:53.726 06:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 01:12:53.726 06:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 01:12:53.726 06:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:12:53.726 Attaching 4 probes... 
01:12:53.726 @path[10.0.0.3, 4420]: 18655 01:12:53.726 @path[10.0.0.3, 4420]: 18944 01:12:53.726 @path[10.0.0.3, 4420]: 18978 01:12:53.726 @path[10.0.0.3, 4420]: 18972 01:12:53.726 @path[10.0.0.3, 4420]: 18969 01:12:53.726 06:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:12:53.726 06:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 01:12:53.726 06:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:12:53.726 06:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 01:12:53.726 06:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 01:12:53.726 06:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 01:12:53.726 06:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80985 01:12:53.726 06:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:12:53.726 06:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:12:53.726 [2024-12-09 06:11:47.828808] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:12:53.726 06:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:12:53.726 06:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 01:13:00.305 06:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 01:13:00.305 06:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81159 01:13:00.305 06:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:13:00.305 06:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80311 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:13:05.578 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:13:05.578 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 01:13:05.836 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 01:13:05.836 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:13:05.836 Attaching 4 probes... 
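The last two cycles exercise real path failover rather than just ANA state flips: listener 10.0.0.3:4421 is removed outright (the tcp.c tqpair notices above appear at that point), a confirm pass verifies that I/O has moved to the non_optimized 4420 path, and then 4421 is added back and marked optimized so the final pass sees I/O return to it. The target-side steps, condensed (commands and sleeps as logged, with the confirm passes indicated by comments):

  #!/usr/bin/env bash
  # Drop one path, let I/O fail over to the other, then restore the dropped path.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  $rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4421
  sleep 1
  # ... confirm_io_on_port non_optimized 4420 ...
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4421
  $rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4421 -n optimized
  sleep 6
  # ... confirm_io_on_port optimized 4421 ...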
01:13:05.836 @path[10.0.0.3, 4421]: 20716 01:13:05.836 @path[10.0.0.3, 4421]: 21087 01:13:05.836 @path[10.0.0.3, 4421]: 21019 01:13:05.836 @path[10.0.0.3, 4421]: 21072 01:13:05.836 @path[10.0.0.3, 4421]: 21053 01:13:05.836 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:13:05.836 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 01:13:05.836 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:13:05.836 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 01:13:05.836 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 01:13:05.836 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 01:13:05.836 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81159 01:13:05.836 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:13:05.836 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80361 01:13:05.836 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80361 ']' 01:13:05.836 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80361 01:13:05.836 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 01:13:05.836 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:13:05.836 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80361 01:13:05.836 killing process with pid 80361 01:13:05.836 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:13:05.836 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:13:05.836 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80361' 01:13:05.836 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80361 01:13:05.836 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80361 01:13:05.836 { 01:13:05.836 "results": [ 01:13:05.836 { 01:13:05.836 "job": "Nvme0n1", 01:13:05.836 "core_mask": "0x4", 01:13:05.836 "workload": "verify", 01:13:05.836 "status": "terminated", 01:13:05.836 "verify_range": { 01:13:05.836 "start": 0, 01:13:05.836 "length": 16384 01:13:05.836 }, 01:13:05.836 "queue_depth": 128, 01:13:05.836 "io_size": 4096, 01:13:05.836 "runtime": 54.310994, 01:13:05.836 "iops": 8640.69989218021, 01:13:05.836 "mibps": 33.752733953828944, 01:13:05.836 "io_failed": 0, 01:13:05.836 "io_timeout": 0, 01:13:05.836 "avg_latency_us": 14799.642328636919, 01:13:05.836 "min_latency_us": 506.6538152610442, 01:13:05.836 "max_latency_us": 7061253.963052209 01:13:05.836 } 01:13:05.836 ], 01:13:05.836 "core_count": 1 01:13:05.836 } 01:13:06.101 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80361 01:13:06.101 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:13:06.101 [2024-12-09 06:11:04.437980] Starting SPDK v25.01-pre git sha1 15ce1ba92 / 
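The terminated-job summary above is internally consistent: with 4096-byte I/O, the reported iops times io_size reproduces the mibps figure, and iops times runtime gives roughly the number of verify I/Os completed over the ~54 s run. A quick cross-check using the values copied from that JSON block:

  # Cross-check the bdevperf summary (numbers taken from the JSON above).
  awk 'BEGIN {
      iops = 8640.69989218021; runtime = 54.310994; io_size = 4096
      printf "MiB/s   : %.6f   (reported: 33.752734)\n", iops * io_size / (1024 * 1024)
      printf "total IO: ~%d completed over %.1f s\n", iops * runtime, runtime
  }'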
DPDK 24.03.0 initialization... 01:13:06.101 [2024-12-09 06:11:04.438057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80361 ] 01:13:06.101 [2024-12-09 06:11:04.588558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:13:06.101 [2024-12-09 06:11:04.628897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:13:06.101 [2024-12-09 06:11:04.669982] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:13:06.101 Running I/O for 90 seconds... 01:13:06.101 8305.00 IOPS, 32.44 MiB/s [2024-12-09T06:12:00.688Z] 8672.50 IOPS, 33.88 MiB/s [2024-12-09T06:12:00.688Z] 9289.00 IOPS, 36.29 MiB/s [2024-12-09T06:12:00.688Z] 9600.75 IOPS, 37.50 MiB/s [2024-12-09T06:12:00.688Z] 9792.20 IOPS, 38.25 MiB/s [2024-12-09T06:12:00.688Z] 9930.83 IOPS, 38.79 MiB/s [2024-12-09T06:12:00.688Z] 10021.00 IOPS, 39.14 MiB/s [2024-12-09T06:12:00.688Z] [2024-12-09 06:11:14.057714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:116504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.101 [2024-12-09 06:11:14.057773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:13:06.101 [2024-12-09 06:11:14.057817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:116512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.101 [2024-12-09 06:11:14.057832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:13:06.101 [2024-12-09 06:11:14.057852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:116520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.101 [2024-12-09 06:11:14.057866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:13:06.101 [2024-12-09 06:11:14.057884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:116528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.101 [2024-12-09 06:11:14.057898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:13:06.101 [2024-12-09 06:11:14.057916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:116536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.101 [2024-12-09 06:11:14.057930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:13:06.101 [2024-12-09 06:11:14.057948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:116544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.101 [2024-12-09 06:11:14.057961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:13:06.101 [2024-12-09 06:11:14.057979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:116552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.101 [2024-12-09 06:11:14.057992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:13:06.101 [2024-12-09 06:11:14.058011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:116560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.101 [2024-12-09 06:11:14.058024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:116568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.058055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:116576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.058124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:116584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.058157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.058189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:116600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.058221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:116608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.058252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:116616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.058284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:116624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.058315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:116632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.058348] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:116640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.058380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:116648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.058411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:116656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.058442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:116120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.058473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.058505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:116136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.058541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:116144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.058574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:116152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.058605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:116160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.058637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:116168 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.058668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:116176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.058700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:116184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.058731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:116192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.058762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:116200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.058794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:116208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.058825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:116216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.058857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:116224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.058889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:116232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.058925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:116240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.058957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.058976] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:116664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.058989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:116672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.059020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:116680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.059052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:116688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.059084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:116696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.059152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:116704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.059183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:116712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.059215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:116720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.059246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:116728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.059278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.059310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0054 p:0 m:0 
dnr:0 01:13:06.102 [2024-12-09 06:11:14.059335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:116744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.059348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.059380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:116248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.059412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:116256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.059444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:116264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.059476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:116272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.059507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:116280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.059540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:116288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.059571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:116296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.059602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:116304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.059634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:116312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.059665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:116320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.059697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:116328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.059733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:116336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.059764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:116344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.059803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:116352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.059835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:116360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.059867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.059898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:116760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.059930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:116768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.059962] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.059980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:116776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.059993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.060012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:116784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.060025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.060043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:116792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.060056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.060074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:116800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.060100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.060119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:116808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.060137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.060156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.102 [2024-12-09 06:11:14.060169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.060187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:116376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.060200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.060218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:116384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.060232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.060250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:116392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.102 [2024-12-09 06:11:14.060263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:13:06.102 [2024-12-09 06:11:14.060281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:116400 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 01:13:06.103 [2024-12-09 06:11:14.060294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.060318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:116408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.103 [2024-12-09 06:11:14.060331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.060350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:116416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.103 [2024-12-09 06:11:14.060362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.060381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:116424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.103 [2024-12-09 06:11:14.060394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.060412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:116432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.103 [2024-12-09 06:11:14.060425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.060443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:116824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.060456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.060476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:116832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.060490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.060508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:116840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.060525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.060544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:116848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.060557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.060575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:116856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.060588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.060607] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.060620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.060638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:116872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.060652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.060670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:116880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.060684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.060717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:116888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.060731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.060750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:116896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.060763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.060782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:116904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.060795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.060813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:116912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.060827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.060848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:116920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.060862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.060880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:116928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.060893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.060911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:116936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.060924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 
06:11:14.060947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:116944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.060961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.060979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:116952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.060993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.061011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:116960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.061024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.061042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:116968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.061056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.061074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:116976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.061096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.061115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:116984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.061129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.061147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:116992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.061160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.061178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:117000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.061192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.061210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:117008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.061223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.061241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:116440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.103 [2024-12-09 06:11:14.061254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.061272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:116448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.103 [2024-12-09 06:11:14.061285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.061304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:116456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.103 [2024-12-09 06:11:14.061317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.061352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:116464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.103 [2024-12-09 06:11:14.061366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.061388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:116472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.103 [2024-12-09 06:11:14.061401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.061419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:116480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.103 [2024-12-09 06:11:14.061433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.061451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:116488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.103 [2024-12-09 06:11:14.061464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.062645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:116496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.103 [2024-12-09 06:11:14.062675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.062698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:117016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.062711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.062730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:117024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.062744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.062762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:117032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.062776] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.062795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:117040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.062808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.062826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:117048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.062840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.062859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:117056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.062872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.062891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:117064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.062904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.063135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:117072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.063162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.063184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:117080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.063197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.063216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:117088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.063229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.063248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.063262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.063281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:117104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.063294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.063316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:117112 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 01:13:06.103 [2024-12-09 06:11:14.063330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.063348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:117120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.063362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.063381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:117128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.063395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:14.063417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:117136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:14.063430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:13:06.103 10080.50 IOPS, 39.38 MiB/s [2024-12-09T06:12:00.690Z] 10068.44 IOPS, 39.33 MiB/s [2024-12-09T06:12:00.690Z] 10080.20 IOPS, 39.38 MiB/s [2024-12-09T06:12:00.690Z] 10089.09 IOPS, 39.41 MiB/s [2024-12-09T06:12:00.690Z] 10098.33 IOPS, 39.45 MiB/s [2024-12-09T06:12:00.690Z] 10099.38 IOPS, 39.45 MiB/s [2024-12-09T06:12:00.690Z] 10100.29 IOPS, 39.45 MiB/s [2024-12-09T06:12:00.690Z] [2024-12-09 06:11:20.486213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:113184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:20.486264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:20.486309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:113192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:20.486323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:20.486342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:20.486355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:20.486398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:113208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:20.486422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:20.486440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:113216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:20.486452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:20.486470] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:20.486482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:20.486500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:113232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:20.486512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:20.486530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:113240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:20.486543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:20.486564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:113248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:20.486577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:20.486595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:20.486607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:20.486625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:113264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:20.486638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:20.486655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:20.486668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:20.486685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:113280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.103 [2024-12-09 06:11:20.486697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:13:06.103 [2024-12-09 06:11:20.486715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.104 [2024-12-09 06:11:20.486727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.486745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:113296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.104 [2024-12-09 06:11:20.486757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 
06:11:20.486781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.104 [2024-12-09 06:11:20.486794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.486812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:112672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.486824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.486844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:112680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.486856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.486874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:112688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.486887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.486904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:112696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.486917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.486935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.486947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.486964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.486977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.486994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:112720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.487006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.487037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:113312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.104 [2024-12-09 06:11:20.487090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.104 [2024-12-09 06:11:20.487131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:113328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.104 [2024-12-09 06:11:20.487161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.104 [2024-12-09 06:11:20.487199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:113344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.104 [2024-12-09 06:11:20.487230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.104 [2024-12-09 06:11:20.487261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:113360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.104 [2024-12-09 06:11:20.487291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:113368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.104 [2024-12-09 06:11:20.487321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:112736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.487352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:112744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.487383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:112752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.487413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:112760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.487444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.487474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.487505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:112784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.487535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:112792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.487570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:112800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.487601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.487631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.487661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:112824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.487691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:112832 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.487722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:112840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.487752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.487783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.487814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.487844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:112872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.487874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:112880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.487905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.487940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:112896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.487971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.487989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.488001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.488019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:21 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.488032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.488049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.488062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.488082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:113376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.104 [2024-12-09 06:11:20.488105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.488123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:113384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.104 [2024-12-09 06:11:20.488136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.488154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:113392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.104 [2024-12-09 06:11:20.488166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.488184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.104 [2024-12-09 06:11:20.488196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.488214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:113408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.104 [2024-12-09 06:11:20.488227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.488244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.104 [2024-12-09 06:11:20.488257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.488274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:113424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.104 [2024-12-09 06:11:20.488287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.488305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.104 [2024-12-09 06:11:20.488318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.488340] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.488352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.488370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.488383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.488400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.488413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.488430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:112952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.488444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.488461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:112960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.488474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.488491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.488504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.488522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:112976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.488534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.488552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.488564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:13:06.104 [2024-12-09 06:11:20.488582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.104 [2024-12-09 06:11:20.488595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.488612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:113000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.105 [2024-12-09 06:11:20.488625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 
sqhd:006e p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.488642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:113008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.105 [2024-12-09 06:11:20.488655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.488672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.105 [2024-12-09 06:11:20.488685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.488719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:113024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.105 [2024-12-09 06:11:20.488732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.488750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.105 [2024-12-09 06:11:20.488763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.488782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.105 [2024-12-09 06:11:20.488794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.488812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:113048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.105 [2024-12-09 06:11:20.488825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.489130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:113440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.489153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.489177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.489190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.489212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.489225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.489246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:113464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.489259] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.489281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.489293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.489315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:113480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.489328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.489358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:113488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.489371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.489392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:113496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.489405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.489427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.489448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.489469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.489482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.489504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.489516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.489538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.489550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.489576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.489589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.489610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:113544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 
06:11:20.489623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.489645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:113552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.489657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.489679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.489691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.489713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.489725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.489747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.489759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.489781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:113584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.489793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.489815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:113592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.489828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.489849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.105 [2024-12-09 06:11:20.489866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.489889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.105 [2024-12-09 06:11:20.489901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.489923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:113072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.105 [2024-12-09 06:11:20.489935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.489957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:113080 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.105 [2024-12-09 06:11:20.489970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.489991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:113088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.105 [2024-12-09 06:11:20.490004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.490025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:113096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.105 [2024-12-09 06:11:20.490038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.490059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:113104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.105 [2024-12-09 06:11:20.490072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.490103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.105 [2024-12-09 06:11:20.490116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.490139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.105 [2024-12-09 06:11:20.490151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.490173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:113128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.105 [2024-12-09 06:11:20.490186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.490207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.105 [2024-12-09 06:11:20.490220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.490242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:113144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.105 [2024-12-09 06:11:20.490254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.490276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:113152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.105 [2024-12-09 06:11:20.490293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.490325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:113160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.105 [2024-12-09 06:11:20.490339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.490361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:113168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.105 [2024-12-09 06:11:20.490374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.490396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:113176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.105 [2024-12-09 06:11:20.490408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.490430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:113600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.490443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.490464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:113608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.490477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.490498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:113616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.490511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.490532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:113624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.490545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.490567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.490579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.490601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:113640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.490613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.490635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:113648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.490647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 
01:13:06.105 [2024-12-09 06:11:20.490669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.490681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.490705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:113664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.490718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.490744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:113672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.490757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.490779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:113680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.490791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:20.490813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:20.490826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:13:06.105 9677.07 IOPS, 37.80 MiB/s [2024-12-09T06:12:00.692Z] 9482.31 IOPS, 37.04 MiB/s [2024-12-09T06:12:00.692Z] 9543.71 IOPS, 37.28 MiB/s [2024-12-09T06:12:00.692Z] 9601.17 IOPS, 37.50 MiB/s [2024-12-09T06:12:00.692Z] 9652.05 IOPS, 37.70 MiB/s [2024-12-09T06:12:00.692Z] 9698.65 IOPS, 37.89 MiB/s [2024-12-09T06:12:00.692Z] 9738.90 IOPS, 38.04 MiB/s [2024-12-09T06:12:00.692Z] [2024-12-09 06:11:27.309072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:27.309130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:27.309185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:27.309200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:27.309219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:27.309232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:27.309251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:27.309264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:27.309282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:27.309306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:27.309323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:27.309336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:27.309361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:27.309373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:27.309391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:27.309403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:27.309421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:27.309450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:27.309468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:27.309480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:27.309498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:27.309511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:13:06.105 [2024-12-09 06:11:27.309528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.105 [2024-12-09 06:11:27.309541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.309558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.309571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.309588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.309601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.309618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.309630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.309648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.309660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.309678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.309690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.309710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.309723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.309740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.309752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.309770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.309782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.309800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:80296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.309818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.309836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.309849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.309866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.309879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.309896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
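(Editorial note, not part of the captured console output.) A quick sanity check on the throughput samples interleaved a little earlier in this window (e.g. "9677.07 IOPS, 37.80 MiB/s"): the command prints around them show len:8 logical blocks and len:0x1000 bytes per I/O, i.e. 4 KiB, and at that size the IOPS and MiB/s figures are mutually consistent. A minimal illustrative sketch in Python (the helper name is made up here; it is not part of the test harness):

# Convert an IOPS sample to MiB/s, assuming the 4 KiB I/O size implied by the
# "len:8" (8 x 512-byte LBAs) / "len:0x1000" fields in the command prints above.
def iops_to_mib_per_s(iops: float, io_size_bytes: int = 4096) -> float:
    return iops * io_size_bytes / (1024 * 1024)

print(f"{iops_to_mib_per_s(9677.07):.2f} MiB/s")  # ~37.80, matching the logged sample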
01:13:06.106 [2024-12-09 06:11:27.309909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.309927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.309940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.309957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:80336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.309970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.309988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.310000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.310030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.310060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.310100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:80376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.310130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:80384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.310161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.310192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 
nsid:1 lba:80400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.310232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:80408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.310263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:80416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.310293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.310324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.310355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.310385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.310416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.310451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.310482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.310513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310531] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.310543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.310574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.310609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.310639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.310670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.310700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.310730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.310761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.310792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.310822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002d p:0 m:0 dnr:0 
01:13:06.106 [2024-12-09 06:11:27.310840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.310852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.310882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.310913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.310943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.310978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.310996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.311009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.311026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.311039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.311056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.311069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.311096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.311110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.311128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:80512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.311140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.311158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.311171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.311189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.311201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.311219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:80536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.311231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.311249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.106 [2024-12-09 06:11:27.311262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.311279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.311293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.311310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.311323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.311341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.311358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.311376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.311389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.311406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.311419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.311437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.311449] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:13:06.106 [2024-12-09 06:11:27.311467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.106 [2024-12-09 06:11:27.311479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.311497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.311510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.311527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.311540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.311558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.311571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.311589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.311602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.311619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.311632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.311649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.311662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.311679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.107 [2024-12-09 06:11:27.311692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.311710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.107 [2024-12-09 06:11:27.311722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.311744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
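(Editorial note, not part of the captured console output.) When scanning these completion prints, the parenthesized pair after the status string — "(03/02)" here — reads as the NVMe status code type / status code, where 0x3/0x2 is the path-related "ANA Inaccessible" status that every retried command in this window completes with, and dnr:0 marks it as retryable. A small ad-hoc parser sketch in Python (the regex and names are illustrative only, not tooling used by this test):

import re

# One completion print copied from the log above.
line = ("nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: "
        "ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 "
        "sqhd:0041 p:0 m:0 dnr:0")

# Pull out the (sct/sc) status pair and the dnr (do-not-retry) bit.
m = re.search(r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\).*dnr:(?P<dnr>\d)", line)
if m:
    sct, sc, dnr = int(m["sct"], 16), int(m["sc"], 16), int(m["dnr"])
    print(sct, sc, dnr)  # 3 2 0 -> path-related / ANA Inaccessible, retryable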
01:13:06.107 [2024-12-09 06:11:27.311757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.311774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.107 [2024-12-09 06:11:27.311792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.311810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.107 [2024-12-09 06:11:27.311823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.311840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.107 [2024-12-09 06:11:27.311853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.311871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.107 [2024-12-09 06:11:27.311883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.311901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.107 [2024-12-09 06:11:27.311914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.311951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.311964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.311982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.311995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.312013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.312025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.312043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.312056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.312073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 
nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.312094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.312112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.312125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.312148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.312161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.312179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.312191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.312209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.312222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.312240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.312252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.312270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.312283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.312300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.312315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.312333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.312345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.312363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.312375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.312393] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.312406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.312423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.312435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.312453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.107 [2024-12-09 06:11:27.312466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.312484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.107 [2024-12-09 06:11:27.312496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.312514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.107 [2024-12-09 06:11:27.312530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.312548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.107 [2024-12-09 06:11:27.312561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.312579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.107 [2024-12-09 06:11:27.312592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.312610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.107 [2024-12-09 06:11:27.312622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.312640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.107 [2024-12-09 06:11:27.312653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.313263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.107 [2024-12-09 06:11:27.313286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0068 p:0 m:0 
dnr:0 01:13:06.107 [2024-12-09 06:11:27.313311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.313324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.313355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.313368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.313392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.313404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.313428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.313443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.313466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.313478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.313501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.313514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.313537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.313557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.313593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.313607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.313630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.313643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.313666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.313679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.313702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.313714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.313738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.313751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.313774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.313787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.313810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.313823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.313846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.313859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:27.313885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:27.313898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:13:06.107 9392.95 IOPS, 36.69 MiB/s [2024-12-09T06:12:00.694Z] 8984.57 IOPS, 35.10 MiB/s [2024-12-09T06:12:00.694Z] 8610.21 IOPS, 33.63 MiB/s [2024-12-09T06:12:00.694Z] 8265.80 IOPS, 32.29 MiB/s [2024-12-09T06:12:00.694Z] 7947.88 IOPS, 31.05 MiB/s [2024-12-09T06:12:00.694Z] 7653.52 IOPS, 29.90 MiB/s [2024-12-09T06:12:00.694Z] 7380.18 IOPS, 28.83 MiB/s [2024-12-09T06:12:00.694Z] 7400.76 IOPS, 28.91 MiB/s [2024-12-09T06:12:00.694Z] 7503.07 IOPS, 29.31 MiB/s [2024-12-09T06:12:00.694Z] 7597.71 IOPS, 29.68 MiB/s [2024-12-09T06:12:00.694Z] 7686.94 IOPS, 30.03 MiB/s [2024-12-09T06:12:00.694Z] 7769.88 IOPS, 30.35 MiB/s [2024-12-09T06:12:00.694Z] 7847.65 IOPS, 30.65 MiB/s [2024-12-09T06:12:00.694Z] [2024-12-09 06:11:40.384449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:40.384505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:40.384551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:40.384599] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:40.384620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:40.384634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:40.384654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:40.384667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:40.384685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:40.384698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:40.384716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:40.384728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:40.384746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:40.384759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:40.384777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:40.384790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:40.384808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.107 [2024-12-09 06:11:40.384821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:13:06.107 [2024-12-09 06:11:40.384839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.384852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.384870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.384883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.384901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.384913] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.384931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.384944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.384962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.384980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.384999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.385012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:62456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.385043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.385074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.385120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.385152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.385183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.385214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.385246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.385277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.385308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.385348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:62536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.385380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.385417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:62552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.385448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.385484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.385516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.385546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:8 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.385578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:62416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.385635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.385663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.385689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.385716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.385742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.385769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.385801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.385828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.385855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63064 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.385882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.385908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.385935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.385961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.385975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.385988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.386014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.386041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.386067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.386103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.386130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 
[2024-12-09 06:11:40.386161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.386188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.386215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.386242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:62608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.386269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:62616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.386295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:62624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.386322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:62632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.386349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.386375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.386401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.386429] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.386455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.386485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.386512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.386538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.386564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.108 [2024-12-09 06:11:40.386591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.386617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:62648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.386643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.386670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.386696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:62672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.386723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.386750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.386777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:62696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.386804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.386835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.108 [2024-12-09 06:11:40.386849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.108 [2024-12-09 06:11:40.386862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.386876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.109 [2024-12-09 06:11:40.386889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.386903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:62728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.109 [2024-12-09 06:11:40.386915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.386929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.109 [2024-12-09 06:11:40.386942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.386956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.109 [2024-12-09 06:11:40.386968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.386982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.109 [2024-12-09 06:11:40.386995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:62760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.109 [2024-12-09 06:11:40.387021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.109 [2024-12-09 06:11:40.387058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.109 [2024-12-09 06:11:40.387084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.109 [2024-12-09 06:11:40.387117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.109 [2024-12-09 06:11:40.387143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.109 [2024-12-09 06:11:40.387168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.109 [2024-12-09 06:11:40.387201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.109 [2024-12-09 06:11:40.387227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:06.109 [2024-12-09 06:11:40.387253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 
[2024-12-09 06:11:40.387273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.109 [2024-12-09 06:11:40.387285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.109 [2024-12-09 06:11:40.387312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.109 [2024-12-09 06:11:40.387338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.109 [2024-12-09 06:11:40.387363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.109 [2024-12-09 06:11:40.387389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.109 [2024-12-09 06:11:40.387415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.109 [2024-12-09 06:11:40.387441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.109 [2024-12-09 06:11:40.387467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.109 [2024-12-09 06:11:40.387493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.109 [2024-12-09 06:11:40.387522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387536] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.109 [2024-12-09 06:11:40.387548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.109 [2024-12-09 06:11:40.387574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.109 [2024-12-09 06:11:40.387600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.109 [2024-12-09 06:11:40.387625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.109 [2024-12-09 06:11:40.387651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138aa0 is same with the state(6) to be set 01:13:06.109 [2024-12-09 06:11:40.387679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:06.109 [2024-12-09 06:11:40.387688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:06.109 [2024-12-09 06:11:40.387698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62888 len:8 PRP1 0x0 PRP2 0x0 01:13:06.109 [2024-12-09 06:11:40.387711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:06.109 [2024-12-09 06:11:40.387733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:06.109 [2024-12-09 06:11:40.387742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63280 len:8 PRP1 0x0 PRP2 0x0 01:13:06.109 [2024-12-09 06:11:40.387755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:06.109 [2024-12-09 06:11:40.387776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:06.109 [2024-12-09 06:11:40.387785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63288 len:8 PRP1 0x0 PRP2 0x0 01:13:06.109 [2024-12-09 06:11:40.387797] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:06.109 [2024-12-09 06:11:40.387818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:06.109 [2024-12-09 06:11:40.387827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63296 len:8 PRP1 0x0 PRP2 0x0 01:13:06.109 [2024-12-09 06:11:40.387840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:06.109 [2024-12-09 06:11:40.387865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:06.109 [2024-12-09 06:11:40.387874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63304 len:8 PRP1 0x0 PRP2 0x0 01:13:06.109 [2024-12-09 06:11:40.387886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:06.109 [2024-12-09 06:11:40.387907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:06.109 [2024-12-09 06:11:40.387916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63312 len:8 PRP1 0x0 PRP2 0x0 01:13:06.109 [2024-12-09 06:11:40.387928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:06.109 [2024-12-09 06:11:40.387949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:06.109 [2024-12-09 06:11:40.387959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63320 len:8 PRP1 0x0 PRP2 0x0 01:13:06.109 [2024-12-09 06:11:40.387970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.387983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:06.109 [2024-12-09 06:11:40.387992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:06.109 [2024-12-09 06:11:40.388001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63328 len:8 PRP1 0x0 PRP2 0x0 01:13:06.109 [2024-12-09 06:11:40.388013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.388025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:06.109 [2024-12-09 06:11:40.388035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:06.109 [2024-12-09 06:11:40.388044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63336 len:8 PRP1 0x0 PRP2 0x0 01:13:06.109 [2024-12-09 06:11:40.388057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.388075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:06.109 [2024-12-09 06:11:40.388084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:06.109 [2024-12-09 06:11:40.388101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63344 len:8 PRP1 0x0 PRP2 0x0 01:13:06.109 [2024-12-09 06:11:40.388129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.388142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:06.109 [2024-12-09 06:11:40.388151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:06.109 [2024-12-09 06:11:40.388160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63352 len:8 PRP1 0x0 PRP2 0x0 01:13:06.109 [2024-12-09 06:11:40.388172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.388185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:06.109 [2024-12-09 06:11:40.388194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:06.109 [2024-12-09 06:11:40.388203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63360 len:8 PRP1 0x0 PRP2 0x0 01:13:06.109 [2024-12-09 06:11:40.388220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.388233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:06.109 [2024-12-09 06:11:40.388242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:06.109 [2024-12-09 06:11:40.388251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63368 len:8 PRP1 0x0 PRP2 0x0 01:13:06.109 [2024-12-09 06:11:40.388264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.388276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:06.109 [2024-12-09 06:11:40.388285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:06.109 [2024-12-09 06:11:40.388295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63376 len:8 PRP1 0x0 PRP2 0x0 01:13:06.109 [2024-12-09 06:11:40.388307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.388320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:06.109 [2024-12-09 06:11:40.388328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:06.109 [2024-12-09 06:11:40.388338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63384 len:8 PRP1 0x0 PRP2 0x0 01:13:06.109 [2024-12-09 06:11:40.388350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 
06:11:40.388363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:06.109 [2024-12-09 06:11:40.388372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:06.109 [2024-12-09 06:11:40.388382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63392 len:8 PRP1 0x0 PRP2 0x0 01:13:06.109 [2024-12-09 06:11:40.388394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.388407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:06.109 [2024-12-09 06:11:40.388419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:06.109 [2024-12-09 06:11:40.388429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63400 len:8 PRP1 0x0 PRP2 0x0 01:13:06.109 [2024-12-09 06:11:40.388442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.388454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:06.109 [2024-12-09 06:11:40.388463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:06.109 [2024-12-09 06:11:40.388473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63408 len:8 PRP1 0x0 PRP2 0x0 01:13:06.109 [2024-12-09 06:11:40.388485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.109 [2024-12-09 06:11:40.388498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:06.109 [2024-12-09 06:11:40.388507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:06.109 [2024-12-09 06:11:40.388516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63416 len:8 PRP1 0x0 PRP2 0x0 01:13:06.110 [2024-12-09 06:11:40.388529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.110 [2024-12-09 06:11:40.388542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:06.110 [2024-12-09 06:11:40.388551] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:06.110 [2024-12-09 06:11:40.388564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63424 len:8 PRP1 0x0 PRP2 0x0 01:13:06.110 [2024-12-09 06:11:40.388577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.110 [2024-12-09 06:11:40.388589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:06.110 [2024-12-09 06:11:40.388599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:06.110 [2024-12-09 06:11:40.388608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63432 len:8 PRP1 0x0 PRP2 0x0 01:13:06.110 [2024-12-09 06:11:40.388621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.110 [2024-12-09 06:11:40.389542] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:13:06.110 [2024-12-09 06:11:40.389607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:06.110 [2024-12-09 06:11:40.389623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:06.110 [2024-12-09 06:11:40.389649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c71e0 (9): Bad file descriptor 01:13:06.110 [2024-12-09 06:11:40.390007] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:13:06.110 [2024-12-09 06:11:40.390033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c71e0 with addr=10.0.0.3, port=4421 01:13:06.110 [2024-12-09 06:11:40.390048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c71e0 is same with the state(6) to be set 01:13:06.110 [2024-12-09 06:11:40.390102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c71e0 (9): Bad file descriptor 01:13:06.110 [2024-12-09 06:11:40.390128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:13:06.110 [2024-12-09 06:11:40.390142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:13:06.110 [2024-12-09 06:11:40.390156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:13:06.110 [2024-12-09 06:11:40.390168] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 01:13:06.110 [2024-12-09 06:11:40.390181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:13:06.110 7902.20 IOPS, 30.87 MiB/s [2024-12-09T06:12:00.697Z] 7939.47 IOPS, 31.01 MiB/s [2024-12-09T06:12:00.697Z] 7980.89 IOPS, 31.18 MiB/s [2024-12-09T06:12:00.697Z] 8019.61 IOPS, 31.33 MiB/s [2024-12-09T06:12:00.697Z] 8057.36 IOPS, 31.47 MiB/s [2024-12-09T06:12:00.697Z] 8092.73 IOPS, 31.61 MiB/s [2024-12-09T06:12:00.697Z] 8125.88 IOPS, 31.74 MiB/s [2024-12-09T06:12:00.697Z] 8155.71 IOPS, 31.86 MiB/s [2024-12-09T06:12:00.697Z] 8187.00 IOPS, 31.98 MiB/s [2024-12-09T06:12:00.697Z] 8216.57 IOPS, 32.10 MiB/s [2024-12-09T06:12:00.697Z] [2024-12-09 06:11:50.425601] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
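The abort/reset sequence above is the multipath failover under test: I/O queued on the broken TCP path is completed manually with ABORTED - SQ DELETION, bdev_nvme keeps retrying the connection to 10.0.0.3 port 4421 (connect() fails with errno 111 while the listener is unreachable), and roughly ten seconds later the reset succeeds and the verify job's IOPS climb back up. A minimal sketch of how a second path like this is typically wired up with the in-tree rpc.py follows; the bdev name Nvme0, the first path's port, and the use of -x multipath are illustrative assumptions rather than the exact arguments used by this run (only the subsystem NQN, the 10.0.0.3 address, and ports 4420/4421 appear in this log).

# Hypothetical sketch: expose the subsystem on a second TCP listener and attach
# both paths under one bdev so bdev_nvme can fail over between them.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Second listener for the subsystem named in the log (address/port as in the log).
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

# Attach the controller once per path with the same bdev name (-b) and a multipath
# policy enabled; Nvme0 and the 4420 primary-path port are assumptions here.
$rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -x multipath
$rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode1 -x multipath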
01:13:06.110 8259.33 IOPS, 32.26 MiB/s [2024-12-09T06:12:00.697Z] 8309.52 IOPS, 32.46 MiB/s [2024-12-09T06:12:00.697Z] 8357.91 IOPS, 32.65 MiB/s [2024-12-09T06:12:00.697Z] 8404.12 IOPS, 32.83 MiB/s [2024-12-09T06:12:00.697Z] 8443.06 IOPS, 32.98 MiB/s [2024-12-09T06:12:00.697Z] 8484.44 IOPS, 33.14 MiB/s [2024-12-09T06:12:00.697Z] 8524.67 IOPS, 33.30 MiB/s [2024-12-09T06:12:00.697Z] 8562.58 IOPS, 33.45 MiB/s [2024-12-09T06:12:00.697Z] 8599.06 IOPS, 33.59 MiB/s [2024-12-09T06:12:00.697Z] 8633.74 IOPS, 33.73 MiB/s [2024-12-09T06:12:00.697Z] Received shutdown signal, test time was about 54.311625 seconds 01:13:06.110 01:13:06.110 Latency(us) 01:13:06.110 [2024-12-09T06:12:00.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:13:06.110 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:13:06.110 Verification LBA range: start 0x0 length 0x4000 01:13:06.110 Nvme0n1 : 54.31 8640.70 33.75 0.00 0.00 14799.64 506.65 7061253.96 01:13:06.110 [2024-12-09T06:12:00.697Z] =================================================================================================================== 01:13:06.110 [2024-12-09T06:12:00.697Z] Total : 8640.70 33.75 0.00 0.00 14799.64 506.65 7061253.96 01:13:06.110 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:13:06.369 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 01:13:06.369 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:13:06.369 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 01:13:06.369 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 01:13:06.369 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 01:13:06.369 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:13:06.369 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 01:13:06.369 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 01:13:06.369 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:13:06.369 rmmod nvme_tcp 01:13:06.369 rmmod nvme_fabrics 01:13:06.369 rmmod nvme_keyring 01:13:06.369 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:13:06.369 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 01:13:06.627 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 01:13:06.627 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 80311 ']' 01:13:06.627 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 80311 01:13:06.627 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80311 ']' 01:13:06.627 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80311 01:13:06.627 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 01:13:06.627 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:13:06.627 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80311 01:13:06.627 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:13:06.627 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:13:06.627 killing process with pid 80311 01:13:06.627 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80311' 01:13:06.627 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80311 01:13:06.627 06:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80311 01:13:06.627 06:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:13:06.627 06:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:13:06.627 06:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:13:06.627 06:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 01:13:06.627 06:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 01:13:06.627 06:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:13:06.627 06:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 01:13:06.627 06:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:13:06.627 06:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:13:06.627 06:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:13:06.885 06:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:13:06.886 06:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:13:06.886 06:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:13:06.886 06:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:13:06.886 06:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:13:06.886 06:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:13:06.886 06:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:13:06.886 06:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:13:06.886 06:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:13:06.886 06:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:13:06.886 06:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:13:06.886 06:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:13:06.886 06:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 01:13:06.886 06:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:13:06.886 06:12:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:13:06.886 06:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:13:07.145 06:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 01:13:07.145 01:13:07.145 real 1m0.093s 01:13:07.145 user 2m40.833s 01:13:07.145 sys 0m22.257s 01:13:07.145 06:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 01:13:07.145 06:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:13:07.145 ************************************ 01:13:07.145 END TEST nvmf_host_multipath 01:13:07.145 ************************************ 01:13:07.145 06:12:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 01:13:07.145 06:12:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:13:07.145 06:12:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:13:07.145 06:12:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:13:07.145 ************************************ 01:13:07.145 START TEST nvmf_timeout 01:13:07.145 ************************************ 01:13:07.145 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 01:13:07.145 * Looking for test storage... 01:13:07.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:13:07.145 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:13:07.145 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version 01:13:07.145 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:13:07.404 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:13:07.404 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:13:07.404 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 01:13:07.404 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 01:13:07.404 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 01:13:07.404 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 01:13:07.404 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 01:13:07.404 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 01:13:07.404 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 01:13:07.404 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:13:07.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:13:07.405 --rc genhtml_branch_coverage=1 01:13:07.405 --rc genhtml_function_coverage=1 01:13:07.405 --rc genhtml_legend=1 01:13:07.405 --rc geninfo_all_blocks=1 01:13:07.405 --rc geninfo_unexecuted_blocks=1 01:13:07.405 01:13:07.405 ' 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:13:07.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:13:07.405 --rc genhtml_branch_coverage=1 01:13:07.405 --rc genhtml_function_coverage=1 01:13:07.405 --rc genhtml_legend=1 01:13:07.405 --rc geninfo_all_blocks=1 01:13:07.405 --rc geninfo_unexecuted_blocks=1 01:13:07.405 01:13:07.405 ' 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:13:07.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:13:07.405 --rc genhtml_branch_coverage=1 01:13:07.405 --rc genhtml_function_coverage=1 01:13:07.405 --rc genhtml_legend=1 01:13:07.405 --rc geninfo_all_blocks=1 01:13:07.405 --rc geninfo_unexecuted_blocks=1 01:13:07.405 01:13:07.405 ' 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:13:07.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:13:07.405 --rc genhtml_branch_coverage=1 01:13:07.405 --rc genhtml_function_coverage=1 01:13:07.405 --rc genhtml_legend=1 01:13:07.405 --rc geninfo_all_blocks=1 01:13:07.405 --rc geninfo_unexecuted_blocks=1 01:13:07.405 01:13:07.405 ' 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:13:07.405 
06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:13:07.405 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 01:13:07.405 06:12:01 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 01:13:07.405 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:13:07.406 Cannot find device "nvmf_init_br" 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:13:07.406 Cannot find device "nvmf_init_br2" 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 01:13:07.406 Cannot find device "nvmf_tgt_br" 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:13:07.406 Cannot find device "nvmf_tgt_br2" 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:13:07.406 Cannot find device "nvmf_init_br" 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:13:07.406 Cannot find device "nvmf_init_br2" 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:13:07.406 Cannot find device "nvmf_tgt_br" 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 01:13:07.406 06:12:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:13:07.664 Cannot find device "nvmf_tgt_br2" 01:13:07.664 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 01:13:07.664 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:13:07.664 Cannot find device "nvmf_br" 01:13:07.664 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 01:13:07.664 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:13:07.664 Cannot find device "nvmf_init_if" 01:13:07.664 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 01:13:07.664 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:13:07.664 Cannot find device "nvmf_init_if2" 01:13:07.664 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 01:13:07.664 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:13:07.664 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:13:07.664 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 01:13:07.664 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:13:07.664 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:13:07.664 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 01:13:07.664 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:13:07.664 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:13:07.664 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:13:07.664 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:13:07.664 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:13:07.664 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
01:13:07.664 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:13:07.664 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:13:07.664 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:13:07.664 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:13:07.664 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:13:07.664 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:13:07.664 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:13:07.664 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:13:07.664 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:13:07.665 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:13:07.665 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:13:07.665 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:13:07.923 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:13:07.923 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:13:07.923 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:13:07.923 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:13:07.923 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:13:07.923 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:13:07.923 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:13:07.923 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:13:07.923 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:13:07.923 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:13:07.923 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:13:07.923 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:13:07.923 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:13:07.923 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
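For readability, the nvmf_veth_init sequence traced above (and verified by the pings that follow) amounts to the condensed sketch below; interface names, the namespace, and addresses are copied from the trace, while the consolidation itself is not part of the captured output:

    # sketch of the topology nvmf_veth_init assembles; names/addresses copied from the trace
    ip netns add nvmf_tgt_ns_spdk                                   # target runs in its own namespace
    ip link add nvmf_init_if  type veth peer name nvmf_init_br      # initiator-side veth pairs
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # target-side veth pairs
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                 # move target ends into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator addresses
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge                                  # bridge joins the peer ends
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done
    # (the trace also brings the if/if2 ends and lo in the namespace up; omitted here for brevity)
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the default port
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT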
01:13:07.923 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:13:07.923 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:13:07.923 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 01:13:07.923 01:13:07.923 --- 10.0.0.3 ping statistics --- 01:13:07.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:13:07.923 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 01:13:07.923 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:13:07.923 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:13:07.923 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.082 ms 01:13:07.923 01:13:07.923 --- 10.0.0.4 ping statistics --- 01:13:07.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:13:07.923 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 01:13:07.923 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:13:07.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:13:07.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 01:13:07.923 01:13:07.923 --- 10.0.0.1 ping statistics --- 01:13:07.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:13:07.923 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 01:13:07.924 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:13:07.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:13:07.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 01:13:07.924 01:13:07.924 --- 10.0.0.2 ping statistics --- 01:13:07.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:13:07.924 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 01:13:07.924 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:13:07.924 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 01:13:07.924 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:13:07.924 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:13:07.924 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:13:07.924 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:13:07.924 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:13:07.924 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:13:07.924 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:13:07.924 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 01:13:07.924 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:13:07.924 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 01:13:07.924 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:13:07.924 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=81523 01:13:07.924 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 01:13:07.924 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 81523 01:13:07.924 06:12:02 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81523 ']' 01:13:07.924 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:13:07.924 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 01:13:07.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:13:07.924 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:13:07.924 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 01:13:07.924 06:12:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:13:07.924 [2024-12-09 06:12:02.482653] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:13:07.924 [2024-12-09 06:12:02.482738] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:13:08.200 [2024-12-09 06:12:02.619822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:13:08.200 [2024-12-09 06:12:02.662302] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:13:08.200 [2024-12-09 06:12:02.662347] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:13:08.200 [2024-12-09 06:12:02.662357] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:13:08.200 [2024-12-09 06:12:02.662365] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:13:08.200 [2024-12-09 06:12:02.662373] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
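The target whose start-up notices appear immediately above and below this point is simply nvmf_tgt launched inside that namespace; a minimal reproduction using the same flags seen in the trace (the two reactors reported next correspond to -m 0x3) would be:

    # sketch: start the NVMe-oF target inside the test namespace (flags from the trace above)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # the test then waits for the RPC socket; polling rpc_get_methods is one simple way to do that
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done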
01:13:08.200 [2024-12-09 06:12:02.663176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:13:08.200 [2024-12-09 06:12:02.663179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:13:08.200 [2024-12-09 06:12:02.705012] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:13:09.137 06:12:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:13:09.137 06:12:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 01:13:09.137 06:12:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:13:09.137 06:12:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 01:13:09.137 06:12:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:13:09.137 06:12:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:13:09.137 06:12:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:13:09.137 06:12:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:13:09.137 [2024-12-09 06:12:03.598278] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:13:09.137 06:12:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:13:09.396 Malloc0 01:13:09.396 06:12:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:13:09.655 06:12:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:13:09.914 06:12:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:13:09.914 [2024-12-09 06:12:04.409751] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:13:09.914 06:12:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81572 01:13:09.914 06:12:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 01:13:09.914 06:12:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81572 /var/tmp/bdevperf.sock 01:13:09.914 06:12:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81572 ']' 01:13:09.914 06:12:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:13:09.914 06:12:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 01:13:09.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:13:09.914 06:12:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
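Once the target answers, the provisioning the test drives through rpc.py above condenses to the following sequence; the commands are copied from the trace, only the grouping is new:

    # target-side provisioning, as captured in the trace
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, options as in the trace
    $rpc bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB RAM bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # expose the bdev through the subsystem
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420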
01:13:09.914 06:12:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 01:13:09.914 06:12:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:13:09.914 [2024-12-09 06:12:04.479267] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:13:09.914 [2024-12-09 06:12:04.479333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81572 ] 01:13:10.172 [2024-12-09 06:12:04.619671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:13:10.172 [2024-12-09 06:12:04.662777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:13:10.172 [2024-12-09 06:12:04.704955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:13:10.745 06:12:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:13:10.745 06:12:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 01:13:10.745 06:12:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 01:13:11.016 06:12:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 01:13:11.275 NVMe0n1 01:13:11.275 06:12:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81596 01:13:11.275 06:12:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:13:11.275 06:12:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 01:13:11.534 Running I/O for 10 seconds... 
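The host side that generates the I/O dumped below is bdevperf driven over its own RPC socket; restated from the trace with the flags unchanged, including the 5-second controller-loss timeout and 2-second reconnect delay that this timeout test exercises:

    # host/bdevperf side, flags as captured in the trace
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &   # queue depth 128, 4 KiB verify I/O, 10 s
    bdevperf_pid=$!
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1                    # option string exactly as in the trace
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    sleep 1                                                                       # mirrors the test's wait before it removes the listener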
01:13:12.481 06:12:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:13:12.481 9857.00 IOPS, 38.50 MiB/s [2024-12-09T06:12:07.068Z] [2024-12-09 06:12:06.967936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:88328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.481 [2024-12-09 06:12:06.967985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.481 [2024-12-09 06:12:06.968006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:88336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.481 [2024-12-09 06:12:06.968017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.481 [2024-12-09 06:12:06.968029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.481 [2024-12-09 06:12:06.968039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.481 [2024-12-09 06:12:06.968050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:88352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.481 [2024-12-09 06:12:06.968060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.481 [2024-12-09 06:12:06.968072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:88360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.481 [2024-12-09 06:12:06.968082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.481 [2024-12-09 06:12:06.968105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.481 [2024-12-09 06:12:06.968131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.481 [2024-12-09 06:12:06.968143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.481 [2024-12-09 06:12:06.968154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.481 [2024-12-09 06:12:06.968166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.481 [2024-12-09 06:12:06.968175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.481 [2024-12-09 06:12:06.968187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.481 [2024-12-09 06:12:06.968197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.481 [2024-12-09 06:12:06.968209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88400 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.481 [2024-12-09 06:12:06.968220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.481 [2024-12-09 06:12:06.968232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.482 [2024-12-09 06:12:06.968241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:88416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.482 [2024-12-09 06:12:06.968263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:88424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.482 [2024-12-09 06:12:06.968297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.482 [2024-12-09 06:12:06.968317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:88440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.482 [2024-12-09 06:12:06.968338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:88448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.482 [2024-12-09 06:12:06.968377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:87944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.482 [2024-12-09 06:12:06.968401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:87952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.482 [2024-12-09 06:12:06.968423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:87960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.482 [2024-12-09 06:12:06.968443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:87968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:13:12.482 [2024-12-09 06:12:06.968466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:87976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.482 [2024-12-09 06:12:06.968488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.482 [2024-12-09 06:12:06.968509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:87992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.482 [2024-12-09 06:12:06.968531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:88000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.482 [2024-12-09 06:12:06.968553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:88456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.482 [2024-12-09 06:12:06.968575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.482 [2024-12-09 06:12:06.968596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.482 [2024-12-09 06:12:06.968617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.482 [2024-12-09 06:12:06.968638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.482 [2024-12-09 06:12:06.968659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.482 [2024-12-09 06:12:06.968679] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.482 [2024-12-09 06:12:06.968700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.482 [2024-12-09 06:12:06.968720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:88520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.482 [2024-12-09 06:12:06.968742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.482 [2024-12-09 06:12:06.968763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:88536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.482 [2024-12-09 06:12:06.968784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.482 [2024-12-09 06:12:06.968804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.482 [2024-12-09 06:12:06.968825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.482 [2024-12-09 06:12:06.968845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:88568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.482 [2024-12-09 06:12:06.968866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:88576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.482 [2024-12-09 06:12:06.968886] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:88008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.482 [2024-12-09 06:12:06.968907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:88016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.482 [2024-12-09 06:12:06.968928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:88024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.482 [2024-12-09 06:12:06.968948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:88032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.482 [2024-12-09 06:12:06.968969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.968980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:88040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.482 [2024-12-09 06:12:06.968989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.969000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:88048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.482 [2024-12-09 06:12:06.969010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.969022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:88056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.482 [2024-12-09 06:12:06.969031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.969042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.482 [2024-12-09 06:12:06.969052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.969063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:88584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.482 [2024-12-09 06:12:06.969073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.482 [2024-12-09 06:12:06.969084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.483 [2024-12-09 06:12:06.969094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:88600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.483 [2024-12-09 06:12:06.969115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:88608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.483 [2024-12-09 06:12:06.969145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.483 [2024-12-09 06:12:06.969167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.483 [2024-12-09 06:12:06.969188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:88632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.483 [2024-12-09 06:12:06.969209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.483 [2024-12-09 06:12:06.969231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.483 [2024-12-09 06:12:06.969252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.483 [2024-12-09 06:12:06.969273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:88664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.483 [2024-12-09 06:12:06.969294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.483 [2024-12-09 06:12:06.969315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
01:13:12.483 [2024-12-09 06:12:06.969326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:88680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.483 [2024-12-09 06:12:06.969336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:88688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.483 [2024-12-09 06:12:06.969365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:88696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.483 [2024-12-09 06:12:06.969386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:88704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.483 [2024-12-09 06:12:06.969407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.483 [2024-12-09 06:12:06.969428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:88720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.483 [2024-12-09 06:12:06.969449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:88728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.483 [2024-12-09 06:12:06.969469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:88736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.483 [2024-12-09 06:12:06.969490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.483 [2024-12-09 06:12:06.969511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:88080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.483 [2024-12-09 06:12:06.969532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969544] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:88088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.483 [2024-12-09 06:12:06.969554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:88096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.483 [2024-12-09 06:12:06.969575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:88104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.483 [2024-12-09 06:12:06.969596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:88112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.483 [2024-12-09 06:12:06.969616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:88120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.483 [2024-12-09 06:12:06.969637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:88128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.483 [2024-12-09 06:12:06.969658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:88136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.483 [2024-12-09 06:12:06.969679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.483 [2024-12-09 06:12:06.969701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:88152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.483 [2024-12-09 06:12:06.969722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:88160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.483 [2024-12-09 06:12:06.969742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969754] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:88168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.483 [2024-12-09 06:12:06.969763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:88176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.483 [2024-12-09 06:12:06.969784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:88184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.483 [2024-12-09 06:12:06.969805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:88192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.483 [2024-12-09 06:12:06.969826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:88744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.483 [2024-12-09 06:12:06.969847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:88752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.483 [2024-12-09 06:12:06.969868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.483 [2024-12-09 06:12:06.969891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.483 [2024-12-09 06:12:06.969920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.483 [2024-12-09 06:12:06.969932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.483 [2024-12-09 06:12:06.969941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.969953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:88784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.484 [2024-12-09 06:12:06.969962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.969974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:2 nsid:1 lba:88792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.484 [2024-12-09 06:12:06.969983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.969994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.484 [2024-12-09 06:12:06.970004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.970020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:88808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.484 [2024-12-09 06:12:06.970029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.970040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:88816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.484 [2024-12-09 06:12:06.970050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.970061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.484 [2024-12-09 06:12:06.970070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.970082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:88832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.484 [2024-12-09 06:12:06.970102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.970114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:88840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.484 [2024-12-09 06:12:06.970123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.970134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:88848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:12.484 [2024-12-09 06:12:06.970144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.970156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:88200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.484 [2024-12-09 06:12:06.970165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.970177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:88208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.484 [2024-12-09 06:12:06.970186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.970197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88216 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 01:13:12.484 [2024-12-09 06:12:06.970207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.970218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:88224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.484 [2024-12-09 06:12:06.970229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.970240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:88232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.484 [2024-12-09 06:12:06.970249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.970260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:88240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.484 [2024-12-09 06:12:06.970272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.970284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:88248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:12.484 [2024-12-09 06:12:06.970293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.970305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1910690 is same with the state(6) to be set 01:13:12.484 [2024-12-09 06:12:06.970317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:12.484 [2024-12-09 06:12:06.970325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:12.484 [2024-12-09 06:12:06.970333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88256 len:8 PRP1 0x0 PRP2 0x0 01:13:12.484 [2024-12-09 06:12:06.970343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.970353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:12.484 [2024-12-09 06:12:06.970363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:12.484 [2024-12-09 06:12:06.970371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88856 len:8 PRP1 0x0 PRP2 0x0 01:13:12.484 [2024-12-09 06:12:06.970380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.970390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:12.484 [2024-12-09 06:12:06.970398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:12.484 [2024-12-09 06:12:06.970407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88864 len:8 PRP1 0x0 PRP2 0x0 01:13:12.484 [2024-12-09 06:12:06.970416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 
06:12:06.970426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:12.484 [2024-12-09 06:12:06.970434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:12.484 [2024-12-09 06:12:06.970442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88872 len:8 PRP1 0x0 PRP2 0x0 01:13:12.484 [2024-12-09 06:12:06.970451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.970460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:12.484 [2024-12-09 06:12:06.970468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:12.484 [2024-12-09 06:12:06.970476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88880 len:8 PRP1 0x0 PRP2 0x0 01:13:12.484 [2024-12-09 06:12:06.970485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.970495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:12.484 [2024-12-09 06:12:06.970503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:12.484 [2024-12-09 06:12:06.970511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88888 len:8 PRP1 0x0 PRP2 0x0 01:13:12.484 [2024-12-09 06:12:06.970520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.970529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:12.484 [2024-12-09 06:12:06.970537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:12.484 [2024-12-09 06:12:06.970544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88896 len:8 PRP1 0x0 PRP2 0x0 01:13:12.484 [2024-12-09 06:12:06.970553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.970564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:12.484 [2024-12-09 06:12:06.970572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:12.484 [2024-12-09 06:12:06.970580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88904 len:8 PRP1 0x0 PRP2 0x0 01:13:12.484 [2024-12-09 06:12:06.970589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.970599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:12.484 [2024-12-09 06:12:06.970607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:12.484 [2024-12-09 06:12:06.970615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88912 len:8 PRP1 0x0 PRP2 0x0 01:13:12.484 [2024-12-09 06:12:06.970624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.970633] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:12.484 [2024-12-09 06:12:06.970642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:12.484 [2024-12-09 06:12:06.970650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88920 len:8 PRP1 0x0 PRP2 0x0 01:13:12.484 [2024-12-09 06:12:06.970659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.970669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:12.484 [2024-12-09 06:12:06.970677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:12.484 [2024-12-09 06:12:06.970686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88928 len:8 PRP1 0x0 PRP2 0x0 01:13:12.484 [2024-12-09 06:12:06.970694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.970704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:12.484 [2024-12-09 06:12:06.970712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:12.484 [2024-12-09 06:12:06.970720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88936 len:8 PRP1 0x0 PRP2 0x0 01:13:12.484 [2024-12-09 06:12:06.970729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.970738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:12.484 [2024-12-09 06:12:06.970746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:12.484 [2024-12-09 06:12:06.970754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88944 len:8 PRP1 0x0 PRP2 0x0 01:13:12.484 [2024-12-09 06:12:06.970763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.484 [2024-12-09 06:12:06.970772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:12.484 [2024-12-09 06:12:06.970780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:12.484 [2024-12-09 06:12:06.970789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88952 len:8 PRP1 0x0 PRP2 0x0 01:13:12.485 [2024-12-09 06:12:06.970798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.485 [2024-12-09 06:12:06.970807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:12.485 [2024-12-09 06:12:06.970815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:12.485 [2024-12-09 06:12:06.970822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88960 len:8 PRP1 0x0 PRP2 0x0 01:13:12.485 [2024-12-09 06:12:06.970831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.485 [2024-12-09 06:12:06.970843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 01:13:12.485 [2024-12-09 06:12:06.970850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:12.485 [2024-12-09 06:12:06.970859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88264 len:8 PRP1 0x0 PRP2 0x0 01:13:12.485 [2024-12-09 06:12:06.970868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.485 [2024-12-09 06:12:06.970879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:12.485 [2024-12-09 06:12:06.970886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:12.485 [2024-12-09 06:12:06.970894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88272 len:8 PRP1 0x0 PRP2 0x0 01:13:12.485 [2024-12-09 06:12:06.970903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.485 [2024-12-09 06:12:06.970913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:12.485 [2024-12-09 06:12:06.970921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:12.485 [2024-12-09 06:12:06.970930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88280 len:8 PRP1 0x0 PRP2 0x0 01:13:12.485 [2024-12-09 06:12:06.970939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.485 [2024-12-09 06:12:06.970948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:12.485 [2024-12-09 06:12:06.970957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:12.485 [2024-12-09 06:12:06.970965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88288 len:8 PRP1 0x0 PRP2 0x0 01:13:12.485 [2024-12-09 06:12:06.970974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.485 [2024-12-09 06:12:06.970984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:12.485 [2024-12-09 06:12:06.970992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:12.485 [2024-12-09 06:12:06.971000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88296 len:8 PRP1 0x0 PRP2 0x0 01:13:12.485 [2024-12-09 06:12:06.971009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.485 [2024-12-09 06:12:06.971018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:12.485 [2024-12-09 06:12:06.971027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:12.485 [2024-12-09 06:12:06.971036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88304 len:8 PRP1 0x0 PRP2 0x0 01:13:12.485 [2024-12-09 06:12:06.971045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.485 [2024-12-09 06:12:06.971054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:12.485 06:12:06 
nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 01:13:12.485 [2024-12-09 06:12:06.988801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:12.485 [2024-12-09 06:12:06.988844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88312 len:8 PRP1 0x0 PRP2 0x0 01:13:12.485 [2024-12-09 06:12:06.988860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.485 [2024-12-09 06:12:06.988879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:12.485 [2024-12-09 06:12:06.988892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:12.485 [2024-12-09 06:12:06.988903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88320 len:8 PRP1 0x0 PRP2 0x0 01:13:12.485 [2024-12-09 06:12:06.988915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.485 [2024-12-09 06:12:06.989114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:13:12.485 [2024-12-09 06:12:06.989134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.485 [2024-12-09 06:12:06.989149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:13:12.485 [2024-12-09 06:12:06.989162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.485 [2024-12-09 06:12:06.989175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:13:12.485 [2024-12-09 06:12:06.989188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.485 [2024-12-09 06:12:06.989202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:13:12.485 [2024-12-09 06:12:06.989214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:12.485 [2024-12-09 06:12:06.989227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b0e50 is same with the state(6) to be set 01:13:12.485 [2024-12-09 06:12:06.989470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 01:13:12.485 [2024-12-09 06:12:06.989495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b0e50 (9): Bad file descriptor 01:13:12.485 [2024-12-09 06:12:06.989597] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:13:12.485 [2024-12-09 06:12:06.989618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b0e50 with addr=10.0.0.3, port=4420 01:13:12.485 [2024-12-09 06:12:06.989632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b0e50 is same with the state(6) to be set 01:13:12.485 [2024-12-09 06:12:06.989653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x18b0e50 (9): Bad file descriptor 01:13:12.485 [2024-12-09 06:12:06.989674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 01:13:12.485 [2024-12-09 06:12:06.989688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 01:13:12.485 [2024-12-09 06:12:06.989701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 01:13:12.485 [2024-12-09 06:12:06.989714] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 01:13:12.485 [2024-12-09 06:12:06.989729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 01:13:14.365 5496.50 IOPS, 21.47 MiB/s [2024-12-09T06:12:09.211Z] 3664.33 IOPS, 14.31 MiB/s [2024-12-09T06:12:09.211Z] [2024-12-09 06:12:08.986621] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:13:14.624 [2024-12-09 06:12:08.986664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b0e50 with addr=10.0.0.3, port=4420 01:13:14.625 [2024-12-09 06:12:08.986677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b0e50 is same with the state(6) to be set 01:13:14.625 [2024-12-09 06:12:08.986697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b0e50 (9): Bad file descriptor 01:13:14.625 [2024-12-09 06:12:08.986715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 01:13:14.625 [2024-12-09 06:12:08.986726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 01:13:14.625 [2024-12-09 06:12:08.986737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 01:13:14.625 [2024-12-09 06:12:08.986748] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
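The repeated "connect() failed, errno = 111" (connection refused) records above show bdev_nvme retrying the TCP connection to 10.0.0.3 port 4420 roughly once per second and scheduling another controller reset each time the attempt fails. One way to watch the same state from outside the test is to poll the controller list over the bdevperf RPC socket; the loop below is illustrative only and is not part of host/timeout.sh:

  # Illustrative helper (not from timeout.sh): list NVMe controllers once a second
  # over the same bdevperf RPC socket while the reconnect/reset loop is running.
  for i in $(seq 1 5); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
    sleep 1
  done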
01:13:14.625 [2024-12-09 06:12:08.986759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 01:13:14.625 06:12:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 01:13:14.625 06:12:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:13:14.625 06:12:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 01:13:14.625 06:12:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 01:13:14.625 06:12:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 01:13:14.625 06:12:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 01:13:14.625 06:12:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 01:13:14.884 06:12:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 01:13:14.884 06:12:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 01:13:16.403 2748.25 IOPS, 10.74 MiB/s [2024-12-09T06:12:10.990Z] 2198.60 IOPS, 8.59 MiB/s [2024-12-09T06:12:10.990Z] [2024-12-09 06:12:10.983667] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:13:16.403 [2024-12-09 06:12:10.983711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b0e50 with addr=10.0.0.3, port=4420 01:13:16.403 [2024-12-09 06:12:10.983726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b0e50 is same with the state(6) to be set 01:13:16.403 [2024-12-09 06:12:10.983748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b0e50 (9): Bad file descriptor 01:13:16.403 [2024-12-09 06:12:10.983767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 01:13:16.403 [2024-12-09 06:12:10.983778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 01:13:16.403 [2024-12-09 06:12:10.983789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 01:13:16.403 [2024-12-09 06:12:10.983799] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 01:13:16.403 [2024-12-09 06:12:10.983812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 01:13:18.345 1832.17 IOPS, 7.16 MiB/s [2024-12-09T06:12:13.191Z] 1570.43 IOPS, 6.13 MiB/s [2024-12-09T06:12:13.191Z] [2024-12-09 06:12:12.980652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 01:13:18.604 [2024-12-09 06:12:12.980686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 01:13:18.604 [2024-12-09 06:12:12.980714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 01:13:18.604 [2024-12-09 06:12:12.980725] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 01:13:18.604 [2024-12-09 06:12:12.980737] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
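The host/timeout.sh@41 and @37 traces above are the test's get_controller and get_bdev checks: while the controller is merely reconnecting, bdev_nvme_get_controllers still reports NVMe0 and bdev_get_bdevs still reports NVMe0n1, so the @57/@58 comparisons pass. A sketch of what those helpers appear to do, reconstructed only from the commands traced here (the authoritative definitions live in host/timeout.sh):

  # Reconstructed sketch, not copied from timeout.sh.
  get_controller() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
  }
  get_bdev() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'
  }
  [[ $(get_controller) == NVMe0 ]]    # still true at this point in the run
  [[ $(get_bdev) == NVMe0n1 ]]        # its namespace bdev is still registered

Once the controller-loss timeout expires and the controller is dropped, the same helpers return empty strings, which is why the later @62/@63 checks compare '' == ''.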
01:13:19.542 1374.12 IOPS, 5.37 MiB/s 01:13:19.542 Latency(us) 01:13:19.542 [2024-12-09T06:12:14.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:13:19.542 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:13:19.542 Verification LBA range: start 0x0 length 0x4000 01:13:19.542 NVMe0n1 : 8.12 1353.46 5.29 15.76 0.00 93557.38 2487.21 7061253.96 01:13:19.542 [2024-12-09T06:12:14.129Z] =================================================================================================================== 01:13:19.542 [2024-12-09T06:12:14.129Z] Total : 1353.46 5.29 15.76 0.00 93557.38 2487.21 7061253.96 01:13:19.542 { 01:13:19.543 "results": [ 01:13:19.543 { 01:13:19.543 "job": "NVMe0n1", 01:13:19.543 "core_mask": "0x4", 01:13:19.543 "workload": "verify", 01:13:19.543 "status": "finished", 01:13:19.543 "verify_range": { 01:13:19.543 "start": 0, 01:13:19.543 "length": 16384 01:13:19.543 }, 01:13:19.543 "queue_depth": 128, 01:13:19.543 "io_size": 4096, 01:13:19.543 "runtime": 8.122173, 01:13:19.543 "iops": 1353.455534621092, 01:13:19.543 "mibps": 5.286935682113641, 01:13:19.543 "io_failed": 128, 01:13:19.543 "io_timeout": 0, 01:13:19.543 "avg_latency_us": 93557.38344699724, 01:13:19.543 "min_latency_us": 2487.209638554217, 01:13:19.543 "max_latency_us": 7061253.963052209 01:13:19.543 } 01:13:19.543 ], 01:13:19.543 "core_count": 1 01:13:19.543 } 01:13:20.111 06:12:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 01:13:20.111 06:12:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:13:20.111 06:12:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 01:13:20.111 06:12:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 01:13:20.111 06:12:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 01:13:20.111 06:12:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 01:13:20.111 06:12:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 01:13:20.371 06:12:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 01:13:20.371 06:12:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 81596 01:13:20.371 06:12:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81572 01:13:20.371 06:12:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81572 ']' 01:13:20.371 06:12:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81572 01:13:20.371 06:12:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 01:13:20.371 06:12:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:13:20.371 06:12:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81572 01:13:20.371 06:12:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:13:20.371 06:12:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:13:20.371 killing process with pid 81572 01:13:20.371 06:12:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81572' 01:13:20.371 Received shutdown signal, test time was about 9.014611 seconds 
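The JSON results block above is the machine-readable copy of the bdevperf summary table that precedes it. The MiB/s figure can be re-derived from iops and io_size as a quick sanity check on the table; the jq line below assumes the JSON has been saved to a file, here hypothetically results.json:

  # mibps should equal iops * io_size / 2^20; with io_size 4096:
  #   1353.455534621092 * 4096 / 1048576 = 5.2869... (matches "mibps" above)
  jq '.results[0] | {iops, mibps, recomputed_mibps: (.iops * .io_size / 1048576)}' results.json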
01:13:20.371 01:13:20.371 Latency(us) 01:13:20.371 [2024-12-09T06:12:14.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:13:20.371 [2024-12-09T06:12:14.958Z] =================================================================================================================== 01:13:20.371 [2024-12-09T06:12:14.958Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:13:20.371 06:12:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81572 01:13:20.371 06:12:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81572 01:13:20.640 06:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:13:20.640 [2024-12-09 06:12:15.208995] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:13:20.903 06:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 01:13:20.903 06:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=81713 01:13:20.903 06:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 81713 /var/tmp/bdevperf.sock 01:13:20.903 06:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81713 ']' 01:13:20.903 06:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:13:20.903 06:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 01:13:20.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:13:20.903 06:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:13:20.903 06:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 01:13:20.903 06:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:13:20.903 [2024-12-09 06:12:15.257660] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:13:20.903 [2024-12-09 06:12:15.257751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81713 ] 01:13:20.903 [2024-12-09 06:12:15.395824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:13:20.903 [2024-12-09 06:12:15.439658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:13:20.903 [2024-12-09 06:12:15.481955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:13:21.839 06:12:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:13:21.839 06:12:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 01:13:21.839 06:12:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 01:13:21.839 06:12:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 01:13:22.099 NVMe0n1 01:13:22.099 06:12:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:13:22.099 06:12:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=81735 01:13:22.099 06:12:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 01:13:22.357 Running I/O for 10 seconds... 
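The bdev_nvme_attach_controller call traced just above arms the rest of the run: the three timeout flags decide how the initiator behaves once the listener is removed again. The same command, with the usual meaning of those flags spelled out (the comments are annotations, not log output):

  #   --reconnect-delay-sec 1       wait about 1 s between reconnect attempts
  #   --fast-io-fail-timeout-sec 2  start failing queued I/O after ~2 s disconnected
  #   --ctrlr-loss-timeout-sec 5    give up and fail the controller after ~5 s
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1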
01:13:23.296 06:12:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:13:23.296 8433.00 IOPS, 32.94 MiB/s [2024-12-09T06:12:17.883Z] [2024-12-09 06:12:17.820507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:23.296 [2024-12-09 06:12:17.820579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.296 [2024-12-09 06:12:17.820602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.296 [2024-12-09 06:12:17.820613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.296 [2024-12-09 06:12:17.820626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.296 [2024-12-09 06:12:17.820636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.296 [2024-12-09 06:12:17.820648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.296 [2024-12-09 06:12:17.820658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.296 [2024-12-09 06:12:17.820670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.296 [2024-12-09 06:12:17.820680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.296 [2024-12-09 06:12:17.820692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.296 [2024-12-09 06:12:17.820702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.296 [2024-12-09 06:12:17.820713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.296 [2024-12-09 06:12:17.820723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.296 [2024-12-09 06:12:17.820734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.296 [2024-12-09 06:12:17.820744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.296 [2024-12-09 06:12:17.820755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.296 [2024-12-09 06:12:17.820765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.296 [2024-12-09 06:12:17.820777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75832 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.296 [2024-12-09 06:12:17.820787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.296 [2024-12-09 06:12:17.820798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.296 [2024-12-09 06:12:17.820808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.296 [2024-12-09 06:12:17.820819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.296 [2024-12-09 06:12:17.820829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.296 [2024-12-09 06:12:17.820840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.296 [2024-12-09 06:12:17.820850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.296 [2024-12-09 06:12:17.820861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.296 [2024-12-09 06:12:17.820871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.296 [2024-12-09 06:12:17.820882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.296 [2024-12-09 06:12:17.820892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.296 [2024-12-09 06:12:17.820903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.820913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.820924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.820934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.820946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.820955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.820966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.820976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.820987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:13:23.297 [2024-12-09 06:12:17.820997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821221] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821441] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.297 [2024-12-09 06:12:17.821810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.297 [2024-12-09 06:12:17.821819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.821831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.821840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.821852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.821862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 
[2024-12-09 06:12:17.821874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.821883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.821895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.821905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.821917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.821926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.821938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.821947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.821959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.821970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.821982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.821991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822096] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:84 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76480 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 06:12:17.822729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.298 [2024-12-09 06:12:17.822740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.298 [2024-12-09 
06:12:17.822750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.299 [2024-12-09 06:12:17.822762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.299 [2024-12-09 06:12:17.822772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.299 [2024-12-09 06:12:17.822783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.299 [2024-12-09 06:12:17.822792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.299 [2024-12-09 06:12:17.822804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.299 [2024-12-09 06:12:17.822814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.299 [2024-12-09 06:12:17.822826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.299 [2024-12-09 06:12:17.822835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.299 [2024-12-09 06:12:17.822847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.299 [2024-12-09 06:12:17.822857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.299 [2024-12-09 06:12:17.822868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.299 [2024-12-09 06:12:17.822878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.299 [2024-12-09 06:12:17.822889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.299 [2024-12-09 06:12:17.822899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.299 [2024-12-09 06:12:17.822911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.299 [2024-12-09 06:12:17.822921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.299 [2024-12-09 06:12:17.822933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.299 [2024-12-09 06:12:17.822942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.299 [2024-12-09 06:12:17.822954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.299 [2024-12-09 06:12:17.822963] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.299 [2024-12-09 06:12:17.822975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.299 [2024-12-09 06:12:17.822985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.299 [2024-12-09 06:12:17.822996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.299 [2024-12-09 06:12:17.823006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.299 [2024-12-09 06:12:17.823018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.299 [2024-12-09 06:12:17.823028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.299 [2024-12-09 06:12:17.823039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.299 [2024-12-09 06:12:17.823049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.299 [2024-12-09 06:12:17.823061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.299 [2024-12-09 06:12:17.823071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.299 [2024-12-09 06:12:17.823082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.299 [2024-12-09 06:12:17.823102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.299 [2024-12-09 06:12:17.823114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.299 [2024-12-09 06:12:17.823124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.299 [2024-12-09 06:12:17.823135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.299 [2024-12-09 06:12:17.823145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.299 [2024-12-09 06:12:17.823156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.299 [2024-12-09 06:12:17.823166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.299 [2024-12-09 06:12:17.823177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.299 [2024-12-09 06:12:17.823186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.299 [2024-12-09 06:12:17.823198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.299 [2024-12-09 06:12:17.823208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.299 [2024-12-09 06:12:17.823219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.299 [2024-12-09 06:12:17.823229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.299 [2024-12-09 06:12:17.823240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.299 [2024-12-09 06:12:17.823250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.299 [2024-12-09 06:12:17.823262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.299 [2024-12-09 06:12:17.823272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.299 [2024-12-09 06:12:17.823283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.299 [2024-12-09 06:12:17.823292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.299 [2024-12-09 06:12:17.823303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:23.299 [2024-12-09 06:12:17.823313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.299 [2024-12-09 06:12:17.823324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e49690 is same with the state(6) to be set 01:13:23.299 [2024-12-09 06:12:17.823336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:23.299 [2024-12-09 06:12:17.823344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:23.299 [2024-12-09 06:12:17.823352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76776 len:8 PRP1 0x0 PRP2 0x0 01:13:23.299 [2024-12-09 06:12:17.823362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:23.299 [2024-12-09 06:12:17.823601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:13:23.299 [2024-12-09 06:12:17.823677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de9e50 (9): Bad file descriptor 01:13:23.299 [2024-12-09 06:12:17.823755] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:13:23.299 [2024-12-09 06:12:17.823777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de9e50 with addr=10.0.0.3, port=4420 01:13:23.299 [2024-12-09 
06:12:17.823792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de9e50 is same with the state(6) to be set 01:13:23.299 [2024-12-09 06:12:17.823808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de9e50 (9): Bad file descriptor 01:13:23.299 [2024-12-09 06:12:17.823824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:13:23.299 [2024-12-09 06:12:17.823834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:13:23.299 [2024-12-09 06:12:17.823845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:13:23.299 [2024-12-09 06:12:17.823855] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 01:13:23.299 [2024-12-09 06:12:17.823866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:13:23.299 06:12:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 01:13:24.238 4735.00 IOPS, 18.50 MiB/s [2024-12-09T06:12:18.825Z] [2024-12-09 06:12:18.822333] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:13:24.238 [2024-12-09 06:12:18.822376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de9e50 with addr=10.0.0.3, port=4420 01:13:24.238 [2024-12-09 06:12:18.822389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de9e50 is same with the state(6) to be set 01:13:24.238 [2024-12-09 06:12:18.822408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de9e50 (9): Bad file descriptor 01:13:24.238 [2024-12-09 06:12:18.822425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:13:24.238 [2024-12-09 06:12:18.822435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:13:24.238 [2024-12-09 06:12:18.822446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:13:24.238 [2024-12-09 06:12:18.822456] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 01:13:24.238 [2024-12-09 06:12:18.822467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:13:24.499 06:12:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:13:24.499 [2024-12-09 06:12:19.029151] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:13:24.499 06:12:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 81735 01:13:25.437 3156.67 IOPS, 12.33 MiB/s [2024-12-09T06:12:20.024Z] [2024-12-09 06:12:19.834374] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
01:13:27.315 2367.50 IOPS, 9.25 MiB/s [2024-12-09T06:12:22.841Z] 3275.20 IOPS, 12.79 MiB/s [2024-12-09T06:12:23.778Z] 4087.33 IOPS, 15.97 MiB/s [2024-12-09T06:12:24.714Z] 4688.71 IOPS, 18.32 MiB/s [2024-12-09T06:12:26.090Z] 5140.50 IOPS, 20.08 MiB/s [2024-12-09T06:12:27.026Z] 5477.89 IOPS, 21.40 MiB/s [2024-12-09T06:12:27.026Z] 5760.50 IOPS, 22.50 MiB/s 01:13:32.439 Latency(us) 01:13:32.439 [2024-12-09T06:12:27.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:13:32.439 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:13:32.439 Verification LBA range: start 0x0 length 0x4000 01:13:32.439 NVMe0n1 : 10.01 5770.04 22.54 0.00 0.00 22156.95 1710.78 3018551.31 01:13:32.439 [2024-12-09T06:12:27.026Z] =================================================================================================================== 01:13:32.439 [2024-12-09T06:12:27.026Z] Total : 5770.04 22.54 0.00 0.00 22156.95 1710.78 3018551.31 01:13:32.439 { 01:13:32.439 "results": [ 01:13:32.439 { 01:13:32.439 "job": "NVMe0n1", 01:13:32.439 "core_mask": "0x4", 01:13:32.439 "workload": "verify", 01:13:32.439 "status": "finished", 01:13:32.439 "verify_range": { 01:13:32.439 "start": 0, 01:13:32.439 "length": 16384 01:13:32.439 }, 01:13:32.439 "queue_depth": 128, 01:13:32.439 "io_size": 4096, 01:13:32.439 "runtime": 10.00565, 01:13:32.439 "iops": 5770.039927440996, 01:13:32.439 "mibps": 22.53921846656639, 01:13:32.440 "io_failed": 0, 01:13:32.440 "io_timeout": 0, 01:13:32.440 "avg_latency_us": 22156.950251013583, 01:13:32.440 "min_latency_us": 1710.7791164658634, 01:13:32.440 "max_latency_us": 3018551.3124497994 01:13:32.440 } 01:13:32.440 ], 01:13:32.440 "core_count": 1 01:13:32.440 } 01:13:32.440 06:12:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=81841 01:13:32.440 06:12:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:13:32.440 06:12:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 01:13:32.440 Running I/O for 10 seconds... 
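For reference, the throughput figures in the bdevperf summary above are self-consistent: the reported MiB/s is simply iops multiplied by io_size, converted to mebibytes. The following minimal Python sketch is illustrative only; it reuses the runtime/iops/io_size values from the JSON result block above, is not part of the test output, and calls no SPDK API.

    # Illustrative cross-check of the bdevperf result printed above (not part of the log).
    # The field names and values are copied from the JSON "results" entry for NVMe0n1.
    result = {
        "runtime": 10.00565,           # seconds, from the JSON above
        "iops": 5770.039927440996,     # from the JSON above
        "io_size": 4096,               # bytes per I/O (4 KiB verify workload)
    }

    # MiB/s = IOPS * bytes per I/O / 2^20
    mibps = result["iops"] * result["io_size"] / (1024 * 1024)
    # Total I/Os completed over the run (derived, not reported directly)
    total_ios = result["iops"] * result["runtime"]

    print(f"{mibps:.2f} MiB/s")        # ~22.54, matching the "mibps" field
    print(f"{total_ios:.0f} I/Os in {result['runtime']:.2f} s")

Running this reproduces the reported 22.54 MiB/s and implies roughly 57.7 thousand I/Os completed over the ~10 s window, consistent with the 5770.04 IOPS line in the summary table.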
01:13:33.380 06:12:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:13:33.380 10670.00 IOPS, 41.68 MiB/s [2024-12-09T06:12:27.967Z] [2024-12-09 06:12:27.908265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1a60 is same with the state(6) to be set 01:13:33.380 [2024-12-09 06:12:27.908318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1a60 is same with the state(6) to be set 01:13:33.380 [2024-12-09 06:12:27.908339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1a60 is same with the state(6) to be set 01:13:33.380 [2024-12-09 06:12:27.908347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1a60 is same with the state(6) to be set 01:13:33.380 [2024-12-09 06:12:27.908427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:94728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.380 [2024-12-09 06:12:27.908463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.380 [2024-12-09 06:12:27.908482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.380 [2024-12-09 06:12:27.908493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.380 [2024-12-09 06:12:27.908505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.380 [2024-12-09 06:12:27.908515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.380 [2024-12-09 06:12:27.908527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:94752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.380 [2024-12-09 06:12:27.908537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.380 [2024-12-09 06:12:27.908548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:94760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.380 [2024-12-09 06:12:27.908558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.380 [2024-12-09 06:12:27.908569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.380 [2024-12-09 06:12:27.908578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.380 [2024-12-09 06:12:27.908589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.380 [2024-12-09 06:12:27.908599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.380 [2024-12-09 06:12:27.908609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:13:33.380 [2024-12-09 06:12:27.908619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.380 [2024-12-09 06:12:27.908630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.380 [2024-12-09 06:12:27.908639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.380 [2024-12-09 06:12:27.908650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.380 [2024-12-09 06:12:27.908675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.380 [2024-12-09 06:12:27.908688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.380 [2024-12-09 06:12:27.908698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.380 [2024-12-09 06:12:27.908709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.380 [2024-12-09 06:12:27.908719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.380 [2024-12-09 06:12:27.908731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.380 [2024-12-09 06:12:27.908741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.380 [2024-12-09 06:12:27.908752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.380 [2024-12-09 06:12:27.908761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.380 [2024-12-09 06:12:27.908773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.380 [2024-12-09 06:12:27.908782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.380 [2024-12-09 06:12:27.908793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.380 [2024-12-09 06:12:27.908804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.380 [2024-12-09 06:12:27.908816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.380 [2024-12-09 06:12:27.908826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.380 [2024-12-09 06:12:27.908837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.380 [2024-12-09 06:12:27.908847] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.380 [2024-12-09 06:12:27.908858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.380 [2024-12-09 06:12:27.908868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.380 [2024-12-09 06:12:27.908880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.380 [2024-12-09 06:12:27.908889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.908900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.908910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.908921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.381 [2024-12-09 06:12:27.908931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.908942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.381 [2024-12-09 06:12:27.908952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.908963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.381 [2024-12-09 06:12:27.908973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.908984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.381 [2024-12-09 06:12:27.908995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.381 [2024-12-09 06:12:27.909017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.381 [2024-12-09 06:12:27.909038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.381 [2024-12-09 06:12:27.909059] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.381 [2024-12-09 06:12:27.909080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.381 [2024-12-09 06:12:27.909544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.381 [2024-12-09 06:12:27.909565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.381 [2024-12-09 06:12:27.909586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.381 [2024-12-09 06:12:27.909607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.381 [2024-12-09 06:12:27.909628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.381 [2024-12-09 06:12:27.909648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.381 [2024-12-09 06:12:27.909669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.381 [2024-12-09 06:12:27.909690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 
06:12:27.909722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909935] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.909985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.909996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.910006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.910017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.910027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.910038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.381 [2024-12-09 06:12:27.910047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.381 [2024-12-09 06:12:27.910058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.382 [2024-12-09 06:12:27.910068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.382 [2024-12-09 06:12:27.910099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.382 [2024-12-09 06:12:27.910136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.382 [2024-12-09 06:12:27.910158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.382 [2024-12-09 06:12:27.910180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.382 [2024-12-09 06:12:27.910201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.382 [2024-12-09 06:12:27.910223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.382 [2024-12-09 06:12:27.910244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.382 [2024-12-09 06:12:27.910265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.382 [2024-12-09 06:12:27.910285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.382 [2024-12-09 06:12:27.910306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.382 [2024-12-09 06:12:27.910326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.382 [2024-12-09 06:12:27.910347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.382 [2024-12-09 06:12:27.910368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:95496 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 01:13:33.382 [2024-12-09 06:12:27.910390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.382 [2024-12-09 06:12:27.910411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.382 [2024-12-09 06:12:27.910432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.382 [2024-12-09 06:12:27.910454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.382 [2024-12-09 06:12:27.910475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.382 [2024-12-09 06:12:27.910497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.382 [2024-12-09 06:12:27.910518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:95552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.382 [2024-12-09 06:12:27.910539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.382 [2024-12-09 06:12:27.910560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.382 [2024-12-09 06:12:27.910581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.382 [2024-12-09 
06:12:27.910602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:95584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.382 [2024-12-09 06:12:27.910623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:13:33.382 [2024-12-09 06:12:27.910644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.382 [2024-12-09 06:12:27.910665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.382 [2024-12-09 06:12:27.910686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.382 [2024-12-09 06:12:27.910707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.382 [2024-12-09 06:12:27.910728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.382 [2024-12-09 06:12:27.910749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.382 [2024-12-09 06:12:27.910770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:33.382 [2024-12-09 06:12:27.910792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e481b0 is same with the state(6) to be set 01:13:33.382 [2024-12-09 06:12:27.910816] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:33.382 [2024-12-09 06:12:27.910825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:33.382 [2024-12-09 06:12:27.910832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95016 len:8 PRP1 0x0 PRP2 0x0 01:13:33.382 [2024-12-09 06:12:27.910842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:33.382 [2024-12-09 06:12:27.910860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:33.382 [2024-12-09 06:12:27.910868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95600 len:8 PRP1 0x0 PRP2 0x0 01:13:33.382 [2024-12-09 06:12:27.910878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:33.382 [2024-12-09 06:12:27.910895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:33.382 [2024-12-09 06:12:27.910903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95608 len:8 PRP1 0x0 PRP2 0x0 01:13:33.382 [2024-12-09 06:12:27.910912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:33.382 [2024-12-09 06:12:27.910929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:33.382 [2024-12-09 06:12:27.910937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95616 len:8 PRP1 0x0 PRP2 0x0 01:13:33.382 [2024-12-09 06:12:27.910947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:33.382 [2024-12-09 06:12:27.910964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:33.382 [2024-12-09 06:12:27.910971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95624 len:8 PRP1 0x0 PRP2 0x0 01:13:33.382 [2024-12-09 06:12:27.910980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.910990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:33.382 [2024-12-09 06:12:27.910998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:33.382 [2024-12-09 06:12:27.911005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95632 len:8 PRP1 0x0 PRP2 0x0 01:13:33.382 [2024-12-09 06:12:27.911014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.911024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 01:13:33.382 [2024-12-09 06:12:27.911033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:33.382 [2024-12-09 06:12:27.911041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95640 len:8 PRP1 0x0 PRP2 0x0 01:13:33.382 [2024-12-09 06:12:27.911050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.911059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:33.382 [2024-12-09 06:12:27.911067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:33.382 [2024-12-09 06:12:27.911076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95648 len:8 PRP1 0x0 PRP2 0x0 01:13:33.382 [2024-12-09 06:12:27.911094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.911106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:33.382 [2024-12-09 06:12:27.911114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:33.382 [2024-12-09 06:12:27.911122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95656 len:8 PRP1 0x0 PRP2 0x0 01:13:33.382 [2024-12-09 06:12:27.911133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.911143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:33.382 [2024-12-09 06:12:27.911150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:33.382 [2024-12-09 06:12:27.911158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95664 len:8 PRP1 0x0 PRP2 0x0 01:13:33.382 [2024-12-09 06:12:27.911168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.911178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:33.382 [2024-12-09 06:12:27.911186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:33.382 [2024-12-09 06:12:27.911193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95672 len:8 PRP1 0x0 PRP2 0x0 01:13:33.382 [2024-12-09 06:12:27.911203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.911213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:33.382 [2024-12-09 06:12:27.911220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:33.382 [2024-12-09 06:12:27.911228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95680 len:8 PRP1 0x0 PRP2 0x0 01:13:33.382 [2024-12-09 06:12:27.911237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.911247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:33.382 [2024-12-09 06:12:27.911254] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:33.382 [2024-12-09 06:12:27.911262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95688 len:8 PRP1 0x0 PRP2 0x0 01:13:33.382 [2024-12-09 06:12:27.911271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.382 [2024-12-09 06:12:27.911281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:33.383 [2024-12-09 06:12:27.911295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:33.383 [2024-12-09 06:12:27.911303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95696 len:8 PRP1 0x0 PRP2 0x0 01:13:33.383 [2024-12-09 06:12:27.911312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.383 [2024-12-09 06:12:27.911322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:33.383 [2024-12-09 06:12:27.911329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:33.383 [2024-12-09 06:12:27.911337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95704 len:8 PRP1 0x0 PRP2 0x0 01:13:33.383 [2024-12-09 06:12:27.911346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.383 [2024-12-09 06:12:27.911356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:33.383 [2024-12-09 06:12:27.911363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:33.383 06:12:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 01:13:33.383 [2024-12-09 06:12:27.929456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95712 len:8 PRP1 0x0 PRP2 0x0 01:13:33.383 [2024-12-09 06:12:27.929503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.383 [2024-12-09 06:12:27.929526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:33.383 [2024-12-09 06:12:27.929539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:33.383 [2024-12-09 06:12:27.929551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95720 len:8 PRP1 0x0 PRP2 0x0 01:13:33.383 [2024-12-09 06:12:27.929564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.383 [2024-12-09 06:12:27.929578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:33.383 [2024-12-09 06:12:27.929589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:33.383 [2024-12-09 06:12:27.929601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95728 len:8 PRP1 0x0 PRP2 0x0 01:13:33.383 [2024-12-09 06:12:27.929614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.383 [2024-12-09 06:12:27.929628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
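Note on the burst of records above: once the target-side submission queue is deleted, the initiator drains its queue; nvme_qpair_abort_queued_reqs() reports "aborting queued i/o" and nvme_qpair_manual_complete_request() completes each queued READ/WRITE with status (00/08), i.e. status code type 0h (generic) / status code 08h, "Command Aborted due to SQ Deletion". The trailing p/m/dnr fields are the completion's phase tag, more bit and do-not-retry bit. A rough measure of how much I/O was cancelled in this window is simply to count those completions in a saved copy of the console output; the file name below is a placeholder, and the count assumes one log record per line:
# count completions aborted by the SQ deletion (hypothetical saved log file)
grep -c 'ABORTED - SQ DELETION' nvmf_timeout_console.log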
01:13:33.383 [2024-12-09 06:12:27.929639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:33.383 [2024-12-09 06:12:27.929650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95736 len:8 PRP1 0x0 PRP2 0x0 01:13:33.383 [2024-12-09 06:12:27.929663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.383 [2024-12-09 06:12:27.929677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:33.383 [2024-12-09 06:12:27.929688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:33.383 [2024-12-09 06:12:27.929699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95744 len:8 PRP1 0x0 PRP2 0x0 01:13:33.383 [2024-12-09 06:12:27.929711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.383 [2024-12-09 06:12:27.929882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:13:33.383 [2024-12-09 06:12:27.929901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.383 [2024-12-09 06:12:27.929917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:13:33.383 [2024-12-09 06:12:27.929930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.383 [2024-12-09 06:12:27.929945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:13:33.383 [2024-12-09 06:12:27.929959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.383 [2024-12-09 06:12:27.929974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:13:33.383 [2024-12-09 06:12:27.929987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:33.383 [2024-12-09 06:12:27.930000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de9e50 is same with the state(6) to be set 01:13:33.383 [2024-12-09 06:12:27.930261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 01:13:33.383 [2024-12-09 06:12:27.930297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de9e50 (9): Bad file descriptor 01:13:33.383 [2024-12-09 06:12:27.930403] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:13:33.383 [2024-12-09 06:12:27.930424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de9e50 with addr=10.0.0.3, port=4420 01:13:33.383 [2024-12-09 06:12:27.930439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de9e50 is same with the state(6) to be set 01:13:33.383 [2024-12-09 06:12:27.930460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de9e50 (9): Bad file descriptor 01:13:33.383 [2024-12-09 
06:12:27.930482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 01:13:33.383 [2024-12-09 06:12:27.930495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 01:13:33.383 [2024-12-09 06:12:27.930510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 01:13:33.383 [2024-12-09 06:12:27.930525] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 01:13:33.383 [2024-12-09 06:12:27.930540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 01:13:34.582 5920.50 IOPS, 23.13 MiB/s [2024-12-09T06:12:29.169Z] [2024-12-09 06:12:28.929016] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:13:34.582 [2024-12-09 06:12:28.929056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de9e50 with addr=10.0.0.3, port=4420 01:13:34.582 [2024-12-09 06:12:28.929069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de9e50 is same with the state(6) to be set 01:13:34.582 [2024-12-09 06:12:28.929102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de9e50 (9): Bad file descriptor 01:13:34.582 [2024-12-09 06:12:28.929121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 01:13:34.582 [2024-12-09 06:12:28.929147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 01:13:34.582 [2024-12-09 06:12:28.929159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 01:13:34.582 [2024-12-09 06:12:28.929170] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 01:13:34.582 [2024-12-09 06:12:28.929181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 01:13:35.521 3947.00 IOPS, 15.42 MiB/s [2024-12-09T06:12:30.108Z] [2024-12-09 06:12:29.927638] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:13:35.521 [2024-12-09 06:12:29.927676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de9e50 with addr=10.0.0.3, port=4420 01:13:35.521 [2024-12-09 06:12:29.927705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de9e50 is same with the state(6) to be set 01:13:35.521 [2024-12-09 06:12:29.927721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de9e50 (9): Bad file descriptor 01:13:35.521 [2024-12-09 06:12:29.927738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 01:13:35.521 [2024-12-09 06:12:29.927748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 01:13:35.521 [2024-12-09 06:12:29.927759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 01:13:35.521 [2024-12-09 06:12:29.927768] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
01:13:35.521 [2024-12-09 06:12:29.927780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 01:13:36.456 2960.25 IOPS, 11.56 MiB/s [2024-12-09T06:12:31.043Z] 06:12:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:13:36.456 [2024-12-09 06:12:30.928641] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:13:36.456 [2024-12-09 06:12:30.928675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de9e50 with addr=10.0.0.3, port=4420 01:13:36.456 [2024-12-09 06:12:30.928687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de9e50 is same with the state(6) to be set 01:13:36.456 [2024-12-09 06:12:30.928871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de9e50 (9): Bad file descriptor 01:13:36.456 [2024-12-09 06:12:30.929055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 01:13:36.456 [2024-12-09 06:12:30.929075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 01:13:36.456 [2024-12-09 06:12:30.929098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 01:13:36.456 [2024-12-09 06:12:30.929110] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 01:13:36.456 [2024-12-09 06:12:30.929121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 01:13:36.715 [2024-12-09 06:12:31.115109] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:13:36.715 06:12:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 81841 01:13:37.540 2368.20 IOPS, 9.25 MiB/s [2024-12-09T06:12:32.127Z] [2024-12-09 06:12:31.953042] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
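Two things are worth reading out of the retry loop above. First, every reconnect attempt while the listener is removed fails in uring_sock_create() with errno 111, which on Linux is ECONNREFUSED, so each controller reset fails and is retried. Second, the per-second bdevperf counters behave like running averages over the whole run: taking the progress lines as roughly one per second, IOPS times elapsed seconds stays constant while the target is unreachable, meaning the completion count is not moving at all. A quick check of the figures printed above (the arithmetic is the point, the awk call is just a convenient calculator):
awk 'BEGIN { print 5920.50*2, 3947.00*3, 2960.25*4, 2368.20*5 }'
# prints 11841 four times: no I/O completed while the listener was down
Once nvmf_subsystem_add_listener re-creates the TCP listener on 10.0.0.3 port 4420 (the "NVMe/TCP Target Listening" notice above), the next reset succeeds and the running average climbs back toward the steady-state rate.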
01:13:39.421 3157.50 IOPS, 12.33 MiB/s [2024-12-09T06:12:34.946Z] 4519.29 IOPS, 17.65 MiB/s [2024-12-09T06:12:35.884Z] 5544.62 IOPS, 21.66 MiB/s [2024-12-09T06:12:36.845Z] 6337.44 IOPS, 24.76 MiB/s [2024-12-09T06:12:36.845Z] 6970.10 IOPS, 27.23 MiB/s 01:13:42.258 Latency(us) 01:13:42.258 [2024-12-09T06:12:36.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:13:42.258 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:13:42.258 Verification LBA range: start 0x0 length 0x4000 01:13:42.258 NVMe0n1 : 10.01 6974.88 27.25 5407.26 0.00 10314.65 1283.08 3032026.99 01:13:42.258 [2024-12-09T06:12:36.845Z] =================================================================================================================== 01:13:42.258 [2024-12-09T06:12:36.845Z] Total : 6974.88 27.25 5407.26 0.00 10314.65 0.00 3032026.99 01:13:42.258 { 01:13:42.258 "results": [ 01:13:42.258 { 01:13:42.258 "job": "NVMe0n1", 01:13:42.258 "core_mask": "0x4", 01:13:42.258 "workload": "verify", 01:13:42.258 "status": "finished", 01:13:42.258 "verify_range": { 01:13:42.258 "start": 0, 01:13:42.258 "length": 16384 01:13:42.258 }, 01:13:42.258 "queue_depth": 128, 01:13:42.258 "io_size": 4096, 01:13:42.258 "runtime": 10.006912, 01:13:42.258 "iops": 6974.878963660318, 01:13:42.258 "mibps": 27.245620951798116, 01:13:42.258 "io_failed": 54110, 01:13:42.258 "io_timeout": 0, 01:13:42.258 "avg_latency_us": 10314.652180351743, 01:13:42.258 "min_latency_us": 1283.0843373493976, 01:13:42.258 "max_latency_us": 3032026.987951807 01:13:42.258 } 01:13:42.258 ], 01:13:42.258 "core_count": 1 01:13:42.258 } 01:13:42.258 06:12:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 81713 01:13:42.258 06:12:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81713 ']' 01:13:42.258 06:12:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81713 01:13:42.258 06:12:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 01:13:42.524 06:12:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:13:42.524 06:12:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81713 01:13:42.524 killing process with pid 81713 01:13:42.524 Received shutdown signal, test time was about 10.000000 seconds 01:13:42.524 01:13:42.524 Latency(us) 01:13:42.524 [2024-12-09T06:12:37.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:13:42.524 [2024-12-09T06:12:37.111Z] =================================================================================================================== 01:13:42.524 [2024-12-09T06:12:37.111Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:13:42.524 06:12:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:13:42.524 06:12:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:13:42.524 06:12:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81713' 01:13:42.524 06:12:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81713 01:13:42.524 06:12:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81713 01:13:42.524 06:12:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=81955 01:13:42.524 06:12:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 01:13:42.524 06:12:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 81955 /var/tmp/bdevperf.sock 01:13:42.524 06:12:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81955 ']' 01:13:42.524 06:12:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:13:42.524 06:12:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 01:13:42.524 06:12:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:13:42.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:13:42.524 06:12:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 01:13:42.524 06:12:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:13:42.524 [2024-12-09 06:12:37.088176] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:13:42.524 [2024-12-09 06:12:37.088248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81955 ] 01:13:42.783 [2024-12-09 06:12:37.239948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:13:42.783 [2024-12-09 06:12:37.283594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:13:42.783 [2024-12-09 06:12:37.325106] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:13:43.721 06:12:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:13:43.721 06:12:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 01:13:43.721 06:12:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81955 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 01:13:43.721 06:12:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=81971 01:13:43.721 06:12:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 01:13:43.721 06:12:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 01:13:43.980 NVMe0n1 01:13:43.980 06:12:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82011 01:13:43.980 06:12:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:13:43.980 06:12:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 01:13:43.980 Running I/O for 10 seconds... 
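For reference, the second bdevperf phase that starts here can be read as the following sequence, reconstructed from the interleaved sh -x trace above (paths, PIDs and addresses are the ones from this run; the backgrounding and waitforlisten plumbing of timeout.sh is omitted, so this is a sketch rather than the script itself):
# start bdevperf idle (-z, run is triggered later over the RPC socket) on core mask 0x4,
# queue depth 128, 4096-byte random reads for 10 seconds
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
# attach the nvmf_timeout.bt bpftrace script to the bdevperf process (pid 81955 in this run)
/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81955 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
# NVMe bdev module options as issued by timeout.sh, then attach the target with a 5 s
# controller-loss timeout and a 2 s reconnect delay
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
# kick off the actual 10 second I/O run
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests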
01:13:44.915 06:12:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:13:45.177 18038.00 IOPS, 70.46 MiB/s [2024-12-09T06:12:39.764Z] [2024-12-09 06:12:39.621360] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621412] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621453] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621469] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns[2024-12-09 06:12:39.621477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with id:0 cdw10:00000000 cdw11:00000000 01:13:45.177 the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.177 [2024-12-09 06:12:39.621501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 ns[2024-12-09 06:12:39.621509] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with id:0 cdw10:00000000 cdw11:00000000 01:13:45.177 the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.177 [2024-12-09 06:12:39.621527] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:13:45.177 [2024-12-09 06:12:39.621535] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-12-09 06:12:39.621543] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.177 the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621552] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:13:45.177 [2024-12-09 06:12:39.621559] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.177 [2024-12-09 06:12:39.621568] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1caae50 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621577] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621586] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621594] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621618] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621625] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621649] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621657] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621664] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 
06:12:39.621680] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621697] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621713] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621729] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621737] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621745] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621760] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621832] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621840] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621847] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same 
with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621871] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621879] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621926] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621935] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.177 [2024-12-09 06:12:39.621943] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.621951] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.621959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.621966] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.621974] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.621982] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.621989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.621998] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622006] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622022] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622029] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622044] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622060] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622067] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622090] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622098] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622123] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622147] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622155] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622163] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622171] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622186] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622202] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the 
state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622244] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622268] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622276] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622316] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622331] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622361] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622369] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622377] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622455] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24dfe10 is same with the state(6) to be set 01:13:45.178 [2024-12-09 06:12:39.622495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:57168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.178 [2024-12-09 06:12:39.622509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.178 [2024-12-09 06:12:39.622526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.178 [2024-12-09 06:12:39.622536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.178 [2024-12-09 06:12:39.622550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:43336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.178 [2024-12-09 06:12:39.622560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.178 [2024-12-09 06:12:39.622572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.178 [2024-12-09 06:12:39.622582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.178 [2024-12-09 06:12:39.622593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.178 [2024-12-09 06:12:39.622603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.178 [2024-12-09 06:12:39.622615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:87616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.178 [2024-12-09 06:12:39.622624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.178 [2024-12-09 06:12:39.622636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.178 [2024-12-09 06:12:39.622646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.178 [2024-12-09 06:12:39.622657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:86496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.178 [2024-12-09 
06:12:39.622667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.178 [2024-12-09 06:12:39.622678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.178 [2024-12-09 06:12:39.622688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.178 [2024-12-09 06:12:39.622700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:54424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.178 [2024-12-09 06:12:39.622710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.178 [2024-12-09 06:12:39.622721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.178 [2024-12-09 06:12:39.622731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.178 [2024-12-09 06:12:39.622742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.178 [2024-12-09 06:12:39.622752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.622764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.622775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.622786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:104072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.622797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.622808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:118808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.622818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.622830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.622840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.622851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.622861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.622872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:86696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.622882] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.622893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:46520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.622903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.622914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:105288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.622924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.622936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.622945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.622956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.622966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.622978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:47768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.622988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.622999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.623020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.623041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:130368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.623062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.623083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.623114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.623137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:111200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.623159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:54576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.623181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.623202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:93712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.623224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.623246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.623268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:49336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.623289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.623311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.623332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.623353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:126264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.623374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:116968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.623396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.623416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:50864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.623438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.623460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.623482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.623504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.623525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:69040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
01:13:45.179 [2024-12-09 06:12:39.623547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:120896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.623568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.623590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:38704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.179 [2024-12-09 06:12:39.623610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:121576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.179 [2024-12-09 06:12:39.623620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.623633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.623643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.623655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.623665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.623676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:69104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.623687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.623699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:104280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.623709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.623721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:121792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.623731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.623742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.623752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.623764] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:100984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.623774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.623785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.623795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.623812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:104384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.623822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.623834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.623844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.623856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:108864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.623866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.623877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:34496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.623887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.623899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:27400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.623909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.623920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:88568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.623930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.623942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:26160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.623952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.623963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:44760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.623973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.623985] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:28488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.623995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.624006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:61832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.624016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.624027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:36032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.624037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.624049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.624059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.624071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.624081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.624101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:83672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.624112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.624123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.624134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.624145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:89856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.624155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.624169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:55024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.624179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.624191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:126216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.624200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.624212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:80 nsid:1 lba:111136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.624221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.624233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.624243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.624254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:66392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.624264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.624276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:37536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.624286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.624297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:42328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.624307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.624318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.624328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.624339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:53888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.624350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.624361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.624371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.624382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.624392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.624404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.624413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.624424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5024 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.624434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.624446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:122872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.180 [2024-12-09 06:12:39.624456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.180 [2024-12-09 06:12:39.624468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:93200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.624478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.624489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.624499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.624512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.624522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.624534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.624543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.624554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.624564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.624576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.624585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.624597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:106040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.624607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.624618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.624628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.624639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:115128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:13:45.181 [2024-12-09 06:12:39.624649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.624660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.624670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.624682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:125832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.624691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.624702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:51544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.624712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.624723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:30136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.624733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.624744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:107448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.624754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.624766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.624776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.624787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:98736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.624797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.624810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.624820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.624831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.624841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.624855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:26704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 
06:12:39.624864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.624876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.624886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.624897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.624907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.624918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.624928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.624939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.624949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.624961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:86872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.624971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.624982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.624991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.625002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.625012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.625024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.625033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.625045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:85328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.625055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.625066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:47568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.625076] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.625095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:42976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.625106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.625117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:44984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.625128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.625139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:49416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.625149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.625162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:105568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.625172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.625184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:89712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.181 [2024-12-09 06:12:39.625194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.181 [2024-12-09 06:12:39.625207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:54376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.182 [2024-12-09 06:12:39.625217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.182 [2024-12-09 06:12:39.625228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.182 [2024-12-09 06:12:39.625238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.182 [2024-12-09 06:12:39.625250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:13:45.182 [2024-12-09 06:12:39.625259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.182 [2024-12-09 06:12:39.625270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17920 is same with the state(6) to be set 01:13:45.182 [2024-12-09 06:12:39.625282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:13:45.182 [2024-12-09 06:12:39.625290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:13:45.182 [2024-12-09 06:12:39.625299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84920 len:8 PRP1 0x0 PRP2 0x0 01:13:45.182 [2024-12-09 06:12:39.625308] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:13:45.182 [2024-12-09 06:12:39.625572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 01:13:45.182 [2024-12-09 06:12:39.625600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1caae50 (9): Bad file descriptor 01:13:45.182 [2024-12-09 06:12:39.625706] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:13:45.182 [2024-12-09 06:12:39.625725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1caae50 with addr=10.0.0.3, port=4420 01:13:45.182 [2024-12-09 06:12:39.625736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1caae50 is same with the state(6) to be set 01:13:45.182 [2024-12-09 06:12:39.625752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1caae50 (9): Bad file descriptor 01:13:45.182 [2024-12-09 06:12:39.625768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 01:13:45.182 [2024-12-09 06:12:39.625778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 01:13:45.182 [2024-12-09 06:12:39.625789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 01:13:45.182 [2024-12-09 06:12:39.625800] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 01:13:45.182 [2024-12-09 06:12:39.625811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 01:13:45.182 06:12:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82011 01:13:47.056 9909.00 IOPS, 38.71 MiB/s [2024-12-09T06:12:41.643Z] 6606.00 IOPS, 25.80 MiB/s [2024-12-09T06:12:41.643Z] [2024-12-09 06:12:41.622722] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:13:47.056 [2024-12-09 06:12:41.622767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1caae50 with addr=10.0.0.3, port=4420 01:13:47.056 [2024-12-09 06:12:41.622781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1caae50 is same with the state(6) to be set 01:13:47.056 [2024-12-09 06:12:41.622803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1caae50 (9): Bad file descriptor 01:13:47.056 [2024-12-09 06:12:41.622821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 01:13:47.056 [2024-12-09 06:12:41.622832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 01:13:47.056 [2024-12-09 06:12:41.622844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 01:13:47.056 [2024-12-09 06:12:41.622855] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
01:13:47.056 [2024-12-09 06:12:41.622867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 01:13:48.933 4954.50 IOPS, 19.35 MiB/s [2024-12-09T06:12:43.779Z] 3963.60 IOPS, 15.48 MiB/s [2024-12-09T06:12:43.779Z] [2024-12-09 06:12:43.619796] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:13:49.192 [2024-12-09 06:12:43.619977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1caae50 with addr=10.0.0.3, port=4420 01:13:49.192 [2024-12-09 06:12:43.620000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1caae50 is same with the state(6) to be set 01:13:49.192 [2024-12-09 06:12:43.620026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1caae50 (9): Bad file descriptor 01:13:49.192 [2024-12-09 06:12:43.620045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 01:13:49.192 [2024-12-09 06:12:43.620057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 01:13:49.192 [2024-12-09 06:12:43.620068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 01:13:49.192 [2024-12-09 06:12:43.620080] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 01:13:49.192 [2024-12-09 06:12:43.620107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 01:13:51.072 3303.00 IOPS, 12.90 MiB/s [2024-12-09T06:12:45.659Z] 2831.14 IOPS, 11.06 MiB/s [2024-12-09T06:12:45.659Z] [2024-12-09 06:12:45.616923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 01:13:51.072 [2024-12-09 06:12:45.617105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 01:13:51.072 [2024-12-09 06:12:45.617124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 01:13:51.072 [2024-12-09 06:12:45.617136] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 01:13:51.072 [2024-12-09 06:12:45.617149] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
01:13:52.268 2477.25 IOPS, 9.68 MiB/s 01:13:52.268 Latency(us) 01:13:52.268 [2024-12-09T06:12:46.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:13:52.268 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 01:13:52.268 NVMe0n1 : 8.11 2442.61 9.54 15.78 0.00 52199.08 1204.13 7061253.96 01:13:52.268 [2024-12-09T06:12:46.855Z] =================================================================================================================== 01:13:52.268 [2024-12-09T06:12:46.855Z] Total : 2442.61 9.54 15.78 0.00 52199.08 1204.13 7061253.96 01:13:52.268 { 01:13:52.268 "results": [ 01:13:52.268 { 01:13:52.268 "job": "NVMe0n1", 01:13:52.268 "core_mask": "0x4", 01:13:52.268 "workload": "randread", 01:13:52.268 "status": "finished", 01:13:52.268 "queue_depth": 128, 01:13:52.268 "io_size": 4096, 01:13:52.268 "runtime": 8.113465, 01:13:52.268 "iops": 2442.606210786637, 01:13:52.268 "mibps": 9.5414305108853, 01:13:52.268 "io_failed": 128, 01:13:52.268 "io_timeout": 0, 01:13:52.268 "avg_latency_us": 52199.07879338471, 01:13:52.268 "min_latency_us": 1204.1253012048194, 01:13:52.268 "max_latency_us": 7061253.963052209 01:13:52.268 } 01:13:52.268 ], 01:13:52.268 "core_count": 1 01:13:52.268 } 01:13:52.268 06:12:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:13:52.268 Attaching 5 probes... 01:13:52.268 1102.888526: reset bdev controller NVMe0 01:13:52.268 1102.959103: reconnect bdev controller NVMe0 01:13:52.268 3099.947066: reconnect delay bdev controller NVMe0 01:13:52.268 3099.966084: reconnect bdev controller NVMe0 01:13:52.268 5097.018353: reconnect delay bdev controller NVMe0 01:13:52.268 5097.035146: reconnect bdev controller NVMe0 01:13:52.268 7094.231833: reconnect delay bdev controller NVMe0 01:13:52.268 7094.247083: reconnect bdev controller NVMe0 01:13:52.268 06:12:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 01:13:52.268 06:12:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 01:13:52.268 06:12:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 81971 01:13:52.268 06:12:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:13:52.268 06:12:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 81955 01:13:52.268 06:12:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81955 ']' 01:13:52.268 06:12:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81955 01:13:52.268 06:12:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 01:13:52.268 06:12:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:13:52.268 06:12:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81955 01:13:52.268 killing process with pid 81955 01:13:52.268 Received shutdown signal, test time was about 8.205003 seconds 01:13:52.268 01:13:52.268 Latency(us) 01:13:52.268 [2024-12-09T06:12:46.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:13:52.268 [2024-12-09T06:12:46.855Z] =================================================================================================================== 01:13:52.268 [2024-12-09T06:12:46.855Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:13:52.268 06:12:46 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:13:52.268 06:12:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:13:52.268 06:12:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81955' 01:13:52.268 06:12:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81955 01:13:52.268 06:12:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81955 01:13:52.527 06:12:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:13:52.527 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 01:13:52.527 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 01:13:52.527 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 01:13:52.527 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 01:13:52.786 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:13:52.786 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 01:13:52.786 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 01:13:52.786 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:13:52.786 rmmod nvme_tcp 01:13:52.786 rmmod nvme_fabrics 01:13:52.786 rmmod nvme_keyring 01:13:52.786 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:13:52.786 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 01:13:52.786 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 01:13:52.786 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 81523 ']' 01:13:52.786 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 81523 01:13:52.786 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81523 ']' 01:13:52.786 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81523 01:13:52.786 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 01:13:52.786 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:13:52.786 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81523 01:13:52.786 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:13:52.786 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:13:52.786 killing process with pid 81523 01:13:52.786 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81523' 01:13:52.786 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81523 01:13:52.786 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81523 01:13:53.046 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:13:53.046 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:13:53.046 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:13:53.046 06:12:47 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 01:13:53.046 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 01:13:53.046 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:13:53.046 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 01:13:53.046 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:13:53.046 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:13:53.046 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:13:53.046 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:13:53.046 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:13:53.046 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:13:53.046 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:13:53.046 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:13:53.046 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:13:53.046 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:13:53.046 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:13:53.046 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:13:53.046 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:13:53.046 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:13:53.305 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:13:53.305 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 01:13:53.305 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:13:53.305 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:13:53.305 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:13:53.305 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 01:13:53.305 01:13:53.305 real 0m46.156s 01:13:53.305 user 2m11.634s 01:13:53.305 sys 0m7.241s 01:13:53.305 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 01:13:53.305 06:12:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:13:53.305 ************************************ 01:13:53.305 END TEST nvmf_timeout 01:13:53.305 ************************************ 01:13:53.305 06:12:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 01:13:53.305 06:12:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 01:13:53.305 01:13:53.305 real 5m1.958s 01:13:53.305 user 12m27.706s 01:13:53.305 sys 1m24.707s 01:13:53.305 06:12:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 01:13:53.305 ************************************ 01:13:53.305 END TEST nvmf_host 
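For reference, the nvmftestfini/nvmf_veth_fini sequence traced above reduces to the shell sketch below: strip only the iptables rules the test tagged with an SPDK_NVMF comment, then detach and delete the veth/bridge topology and the target namespace. Interface and namespace names are taken from the trace; error handling and helper internals are simplified.

    #!/usr/bin/env bash
    # Sketch of the network teardown performed by nvmf_tcp_fini/nvmf_veth_fini above.
    # Names come from the trace; this is a simplified stand-in for the real helpers.

    # Remove only the firewall rules the test added (they carry an SPDK_NVMF comment).
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Detach the bridge ports and bring them down.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster || true
        ip link set "$dev" down || true
    done

    # Delete the bridge, the initiator-side veth ends, and the target-side ends in the namespace.
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true

    # remove_spdk_ns finally drops the namespace itself (shown here directly).
    ip netns delete nvmf_tgt_ns_spdk || true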
01:13:53.305 06:12:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:13:53.305 ************************************ 01:13:53.305 06:12:47 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 01:13:53.305 06:12:47 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 01:13:53.305 01:13:53.305 real 12m8.211s 01:13:53.305 user 27m36.126s 01:13:53.305 sys 3m44.774s 01:13:53.305 06:12:47 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 01:13:53.305 06:12:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:13:53.305 ************************************ 01:13:53.305 END TEST nvmf_tcp 01:13:53.305 ************************************ 01:13:53.565 06:12:47 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 01:13:53.565 06:12:47 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 01:13:53.565 06:12:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:13:53.565 06:12:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:13:53.565 06:12:47 -- common/autotest_common.sh@10 -- # set +x 01:13:53.565 ************************************ 01:13:53.565 START TEST nvmf_dif 01:13:53.565 ************************************ 01:13:53.565 06:12:47 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 01:13:53.565 * Looking for test storage... 01:13:53.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:13:53.565 06:12:48 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:13:53.565 06:12:48 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:13:53.565 06:12:48 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 01:13:53.565 06:12:48 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:13:53.565 06:12:48 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:13:53.565 06:12:48 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 01:13:53.565 06:12:48 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 01:13:53.565 06:12:48 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 01:13:53.565 06:12:48 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 01:13:53.565 06:12:48 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 01:13:53.565 06:12:48 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 01:13:53.565 06:12:48 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 01:13:53.565 06:12:48 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 01:13:53.565 06:12:48 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 01:13:53.565 06:12:48 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:13:53.565 06:12:48 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 01:13:53.565 06:12:48 nvmf_dif -- scripts/common.sh@345 -- # : 1 01:13:53.565 06:12:48 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 01:13:53.565 06:12:48 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:13:53.565 06:12:48 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 01:13:53.565 06:12:48 nvmf_dif -- scripts/common.sh@353 -- # local d=1 01:13:53.565 06:12:48 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:13:53.565 06:12:48 nvmf_dif -- scripts/common.sh@355 -- # echo 1 01:13:53.565 06:12:48 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 01:13:53.565 06:12:48 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 01:13:53.565 06:12:48 nvmf_dif -- scripts/common.sh@353 -- # local d=2 01:13:53.565 06:12:48 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:13:53.565 06:12:48 nvmf_dif -- scripts/common.sh@355 -- # echo 2 01:13:53.825 06:12:48 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 01:13:53.825 06:12:48 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:13:53.825 06:12:48 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:13:53.825 06:12:48 nvmf_dif -- scripts/common.sh@368 -- # return 0 01:13:53.825 06:12:48 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:13:53.825 06:12:48 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:13:53.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:13:53.825 --rc genhtml_branch_coverage=1 01:13:53.825 --rc genhtml_function_coverage=1 01:13:53.825 --rc genhtml_legend=1 01:13:53.825 --rc geninfo_all_blocks=1 01:13:53.825 --rc geninfo_unexecuted_blocks=1 01:13:53.825 01:13:53.825 ' 01:13:53.825 06:12:48 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:13:53.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:13:53.825 --rc genhtml_branch_coverage=1 01:13:53.825 --rc genhtml_function_coverage=1 01:13:53.825 --rc genhtml_legend=1 01:13:53.825 --rc geninfo_all_blocks=1 01:13:53.825 --rc geninfo_unexecuted_blocks=1 01:13:53.825 01:13:53.825 ' 01:13:53.825 06:12:48 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:13:53.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:13:53.825 --rc genhtml_branch_coverage=1 01:13:53.825 --rc genhtml_function_coverage=1 01:13:53.825 --rc genhtml_legend=1 01:13:53.825 --rc geninfo_all_blocks=1 01:13:53.825 --rc geninfo_unexecuted_blocks=1 01:13:53.825 01:13:53.825 ' 01:13:53.825 06:12:48 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:13:53.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:13:53.825 --rc genhtml_branch_coverage=1 01:13:53.825 --rc genhtml_function_coverage=1 01:13:53.825 --rc genhtml_legend=1 01:13:53.825 --rc geninfo_all_blocks=1 01:13:53.825 --rc geninfo_unexecuted_blocks=1 01:13:53.825 01:13:53.825 ' 01:13:53.826 06:12:48 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:13:53.826 06:12:48 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:13:53.826 06:12:48 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 01:13:53.826 06:12:48 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:13:53.826 06:12:48 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:13:53.826 06:12:48 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:13:53.826 06:12:48 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:53.826 06:12:48 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:53.826 06:12:48 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:53.826 06:12:48 nvmf_dif -- paths/export.sh@5 -- # export PATH 01:13:53.826 06:12:48 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@51 -- # : 0 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:13:53.826 06:12:48 nvmf_dif -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:13:53.826 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 01:13:53.826 06:12:48 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 01:13:53.826 06:12:48 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 01:13:53.826 06:12:48 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 01:13:53.826 06:12:48 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 01:13:53.826 06:12:48 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:13:53.826 06:12:48 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:13:53.826 06:12:48 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:13:53.826 Cannot find device 
"nvmf_init_br" 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@162 -- # true 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:13:53.826 Cannot find device "nvmf_init_br2" 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@163 -- # true 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:13:53.826 Cannot find device "nvmf_tgt_br" 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@164 -- # true 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:13:53.826 Cannot find device "nvmf_tgt_br2" 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@165 -- # true 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:13:53.826 Cannot find device "nvmf_init_br" 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@166 -- # true 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:13:53.826 Cannot find device "nvmf_init_br2" 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@167 -- # true 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:13:53.826 Cannot find device "nvmf_tgt_br" 01:13:53.826 06:12:48 nvmf_dif -- nvmf/common.sh@168 -- # true 01:13:53.827 06:12:48 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:13:53.827 Cannot find device "nvmf_tgt_br2" 01:13:53.827 06:12:48 nvmf_dif -- nvmf/common.sh@169 -- # true 01:13:53.827 06:12:48 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:13:53.827 Cannot find device "nvmf_br" 01:13:53.827 06:12:48 nvmf_dif -- nvmf/common.sh@170 -- # true 01:13:53.827 06:12:48 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:13:53.827 Cannot find device "nvmf_init_if" 01:13:53.827 06:12:48 nvmf_dif -- nvmf/common.sh@171 -- # true 01:13:53.827 06:12:48 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:13:53.827 Cannot find device "nvmf_init_if2" 01:13:53.827 06:12:48 nvmf_dif -- nvmf/common.sh@172 -- # true 01:13:53.827 06:12:48 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:13:53.827 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:13:53.827 06:12:48 nvmf_dif -- nvmf/common.sh@173 -- # true 01:13:53.827 06:12:48 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:13:53.827 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:13:53.827 06:12:48 nvmf_dif -- nvmf/common.sh@174 -- # true 01:13:53.827 06:12:48 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:13:53.827 06:12:48 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:13:54.086 06:12:48 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev 
nvmf_init_if2 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:13:54.087 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:13:54.087 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 01:13:54.087 01:13:54.087 --- 10.0.0.3 ping statistics --- 01:13:54.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:13:54.087 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:13:54.087 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
01:13:54.087 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.093 ms 01:13:54.087 01:13:54.087 --- 10.0.0.4 ping statistics --- 01:13:54.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:13:54.087 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 01:13:54.087 06:12:48 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:13:54.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:13:54.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 01:13:54.087 01:13:54.087 --- 10.0.0.1 ping statistics --- 01:13:54.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:13:54.087 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 01:13:54.347 06:12:48 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:13:54.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:13:54.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 01:13:54.347 01:13:54.347 --- 10.0.0.2 ping statistics --- 01:13:54.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:13:54.347 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 01:13:54.347 06:12:48 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:13:54.347 06:12:48 nvmf_dif -- nvmf/common.sh@461 -- # return 0 01:13:54.347 06:12:48 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 01:13:54.347 06:12:48 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:13:54.916 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:13:54.916 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:13:54.916 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:13:54.916 06:12:49 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:13:54.916 06:12:49 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:13:54.916 06:12:49 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:13:54.916 06:12:49 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:13:54.916 06:12:49 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:13:54.916 06:12:49 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:13:54.916 06:12:49 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 01:13:54.916 06:12:49 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 01:13:54.916 06:12:49 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:13:54.916 06:12:49 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 01:13:54.916 06:12:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:13:54.916 06:12:49 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=82515 01:13:54.916 06:12:49 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:13:54.917 06:12:49 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 82515 01:13:54.917 06:12:49 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 82515 ']' 01:13:54.917 06:12:49 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:13:54.917 06:12:49 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 01:13:54.917 06:12:49 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:13:54.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
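The target application for the dif tests is launched inside the nvmf_tgt_ns_spdk namespace, as the NVMF_APP and nvmfappstart entries above show, and the test then waits for the RPC socket. A minimal sketch of that launch, using the paths and flags from the trace; the polling loop is a simplified stand-in for the waitforlisten helper, which performs more thorough checks.

    #!/usr/bin/env bash
    # Minimal sketch: run nvmf_tgt inside the test namespace and wait for its RPC socket.
    SPDK=/home/vagrant/spdk_repo/spdk

    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    nvmfpid=$!

    # Wait until the RPC UNIX socket shows up (simplified waitforlisten).
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk.sock ] && break
        sleep 0.1
    done
    echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"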
01:13:54.917 06:12:49 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 01:13:54.917 06:12:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:13:54.917 [2024-12-09 06:12:49.411802] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:13:54.917 [2024-12-09 06:12:49.411861] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:13:55.176 [2024-12-09 06:12:49.561652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:13:55.176 [2024-12-09 06:12:49.600372] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:13:55.176 [2024-12-09 06:12:49.600407] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:13:55.176 [2024-12-09 06:12:49.600417] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:13:55.176 [2024-12-09 06:12:49.600425] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:13:55.176 [2024-12-09 06:12:49.600431] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:13:55.176 [2024-12-09 06:12:49.600684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:13:55.176 [2024-12-09 06:12:49.642402] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:13:55.746 06:12:50 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:13:55.746 06:12:50 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 01:13:55.746 06:12:50 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:13:55.746 06:12:50 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 01:13:55.746 06:12:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:13:55.746 06:12:50 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:13:55.746 06:12:50 nvmf_dif -- target/dif.sh@139 -- # create_transport 01:13:55.746 06:12:50 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 01:13:55.746 06:12:50 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:55.746 06:12:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:13:55.746 [2024-12-09 06:12:50.314908] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:13:55.746 06:12:50 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:55.746 06:12:50 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 01:13:55.747 06:12:50 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:13:55.747 06:12:50 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 01:13:55.747 06:12:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:13:56.005 ************************************ 01:13:56.005 START TEST fio_dif_1_default 01:13:56.005 ************************************ 01:13:56.005 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 01:13:56.005 06:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 01:13:56.005 06:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 01:13:56.005 06:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 01:13:56.005 06:12:50 
nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 01:13:56.005 06:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 01:13:56.005 06:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:13:56.005 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:56.005 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:13:56.005 bdev_null0 01:13:56.005 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:56.005 06:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:13:56.005 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:56.005 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:13:56.005 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:56.005 06:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:13:56.005 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:56.005 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:13:56.006 [2024-12-09 06:12:50.378919] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:13:56.006 { 01:13:56.006 "params": { 01:13:56.006 "name": "Nvme$subsystem", 01:13:56.006 "trtype": "$TEST_TRANSPORT", 01:13:56.006 "traddr": "$NVMF_FIRST_TARGET_IP", 01:13:56.006 "adrfam": "ipv4", 01:13:56.006 "trsvcid": "$NVMF_PORT", 01:13:56.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:13:56.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:13:56.006 "hdgst": ${hdgst:-false}, 01:13:56.006 "ddgst": ${ddgst:-false} 01:13:56.006 }, 01:13:56.006 "method": "bdev_nvme_attach_controller" 01:13:56.006 } 01:13:56.006 EOF 01:13:56.006 )") 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
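Collected from the rpc_cmd trace above, the target-side setup for fio_dif_1_default amounts to five RPC calls: a TCP transport with DIF insert/strip, a null bdev carrying 16-byte metadata with DIF type 1, and a subsystem exposing that bdev on 10.0.0.3:4420. The same calls issued directly through rpc.py:

    #!/usr/bin/env bash
    # The subsystem built for fio_dif_1_default, as direct rpc.py calls (flags from the trace).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # TCP transport with DIF insert/strip enabled (dif.sh@136 / dif.sh@50).
    $rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip

    # Null bdev: size 64, 512-byte blocks, 16-byte metadata, DIF type 1 (dif.sh@21).
    $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1

    # Subsystem with the null bdev as its namespace, listening on 10.0.0.3:4420.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420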
01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:13:56.006 "params": { 01:13:56.006 "name": "Nvme0", 01:13:56.006 "trtype": "tcp", 01:13:56.006 "traddr": "10.0.0.3", 01:13:56.006 "adrfam": "ipv4", 01:13:56.006 "trsvcid": "4420", 01:13:56.006 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:13:56.006 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:13:56.006 "hdgst": false, 01:13:56.006 "ddgst": false 01:13:56.006 }, 01:13:56.006 "method": "bdev_nvme_attach_controller" 01:13:56.006 }' 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:13:56.006 06:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:13:56.265 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:13:56.265 fio-3.35 01:13:56.265 Starting 1 thread 01:14:08.479 01:14:08.479 filename0: (groupid=0, jobs=1): err= 0: pid=82587: Mon Dec 9 06:13:01 2024 01:14:08.479 read: IOPS=12.5k, BW=48.7MiB/s (51.0MB/s)(487MiB/10001msec) 01:14:08.479 slat (usec): min=5, max=184, avg= 5.75, stdev= 1.26 01:14:08.479 clat (usec): min=278, max=1712, avg=305.30, stdev=19.77 01:14:08.479 lat (usec): min=283, max=1741, avg=311.05, stdev=20.03 01:14:08.479 clat percentiles (usec): 01:14:08.479 | 1.00th=[ 281], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 293], 01:14:08.479 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 302], 60.00th=[ 306], 01:14:08.479 | 70.00th=[ 310], 80.00th=[ 318], 90.00th=[ 326], 95.00th=[ 330], 01:14:08.479 | 99.00th=[ 347], 99.50th=[ 355], 99.90th=[ 416], 99.95th=[ 469], 01:14:08.479 | 99.99th=[ 685] 01:14:08.479 bw ( KiB/s): min=49216, max=50528, per=100.00%, avg=49903.16, stdev=391.97, samples=19 01:14:08.479 iops : min=12304, max=12632, avg=12475.79, stdev=97.99, samples=19 01:14:08.479 lat (usec) : 500=99.97%, 750=0.02% 01:14:08.479 lat (msec) : 2=0.01% 01:14:08.479 cpu : usr=79.09%, sys=19.44%, ctx=40, majf=0, minf=9 01:14:08.479 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:14:08.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:08.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:08.479 issued rwts: total=124604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:08.479 latency : target=0, window=0, percentile=100.00%, depth=4 01:14:08.479 01:14:08.479 Run status group 0 (all jobs): 01:14:08.479 READ: 
bw=48.7MiB/s (51.0MB/s), 48.7MiB/s-48.7MiB/s (51.0MB/s-51.0MB/s), io=487MiB (510MB), run=10001-10001msec 01:14:08.479 06:13:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 01:14:08.479 06:13:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 01:14:08.479 06:13:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 01:14:08.479 06:13:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 01:14:08.479 06:13:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 01:14:08.479 06:13:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:14:08.479 06:13:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:08.479 06:13:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:14:08.479 06:13:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:08.480 01:14:08.480 real 0m11.143s 01:14:08.480 user 0m8.594s 01:14:08.480 sys 0m2.345s 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:14:08.480 ************************************ 01:14:08.480 END TEST fio_dif_1_default 01:14:08.480 ************************************ 01:14:08.480 06:13:01 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 01:14:08.480 06:13:01 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:08.480 06:13:01 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:08.480 06:13:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:14:08.480 ************************************ 01:14:08.480 START TEST fio_dif_1_multi_subsystems 01:14:08.480 ************************************ 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:14:08.480 bdev_null0 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
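Both fio runs in this section are launched the same way: the SPDK bdev fio plugin is preloaded and fio reads a bdev_nvme JSON config plus a generated job file from file descriptors 62 and 61. The sketch below uses regular files instead of fds; the params block is the one printed by the trace, while the surrounding "subsystems" wrapper and the job-file contents are illustrative rather than the exact gen_nvmf_target_json/gen_fio_conf output.

    #!/usr/bin/env bash
    # Sketch of the fio + spdk_bdev plugin invocation used by the dif tests above.
    SPDK=/home/vagrant/spdk_repo/spdk

    # JSON config attaching the NVMe-oF controller; params match the trace output,
    # the wrapper follows the usual SPDK JSON config shape.
    cat > /tmp/nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    # Illustrative job file; the trace builds one like this via gen_fio_conf.
    cat > /tmp/dif.fio <<'EOF'
    [filename0]
    ioengine=spdk_bdev
    filename=Nvme0n1
    rw=randread
    bs=4096
    iodepth=4
    EOF

    LD_PRELOAD="$SPDK/build/fio/spdk_bdev" \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme0.json /tmp/dif.fio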
01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:14:08.480 [2024-12-09 06:13:01.596308] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:14:08.480 bdev_null1 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4420 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:14:08.480 { 01:14:08.480 "params": { 01:14:08.480 "name": "Nvme$subsystem", 01:14:08.480 "trtype": "$TEST_TRANSPORT", 01:14:08.480 "traddr": "$NVMF_FIRST_TARGET_IP", 01:14:08.480 "adrfam": "ipv4", 01:14:08.480 "trsvcid": "$NVMF_PORT", 01:14:08.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:14:08.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:14:08.480 "hdgst": ${hdgst:-false}, 01:14:08.480 "ddgst": ${ddgst:-false} 01:14:08.480 }, 01:14:08.480 "method": "bdev_nvme_attach_controller" 01:14:08.480 } 01:14:08.480 EOF 01:14:08.480 )") 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1349 -- # grep libasan 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:14:08.480 { 01:14:08.480 "params": { 01:14:08.480 "name": "Nvme$subsystem", 01:14:08.480 "trtype": "$TEST_TRANSPORT", 01:14:08.480 "traddr": "$NVMF_FIRST_TARGET_IP", 01:14:08.480 "adrfam": "ipv4", 01:14:08.480 "trsvcid": "$NVMF_PORT", 01:14:08.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:14:08.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:14:08.480 "hdgst": ${hdgst:-false}, 01:14:08.480 "ddgst": ${ddgst:-false} 01:14:08.480 }, 01:14:08.480 "method": "bdev_nvme_attach_controller" 01:14:08.480 } 01:14:08.480 EOF 01:14:08.480 )") 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 01:14:08.480 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:14:08.480 "params": { 01:14:08.480 "name": "Nvme0", 01:14:08.480 "trtype": "tcp", 01:14:08.480 "traddr": "10.0.0.3", 01:14:08.480 "adrfam": "ipv4", 01:14:08.480 "trsvcid": "4420", 01:14:08.480 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:14:08.480 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:14:08.480 "hdgst": false, 01:14:08.480 "ddgst": false 01:14:08.480 }, 01:14:08.481 "method": "bdev_nvme_attach_controller" 01:14:08.481 },{ 01:14:08.481 "params": { 01:14:08.481 "name": "Nvme1", 01:14:08.481 "trtype": "tcp", 01:14:08.481 "traddr": "10.0.0.3", 01:14:08.481 "adrfam": "ipv4", 01:14:08.481 "trsvcid": "4420", 01:14:08.481 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:14:08.481 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:14:08.481 "hdgst": false, 01:14:08.481 "ddgst": false 01:14:08.481 }, 01:14:08.481 "method": "bdev_nvme_attach_controller" 01:14:08.481 }' 01:14:08.481 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 01:14:08.481 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:14:08.481 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:14:08.481 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:14:08.481 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:14:08.481 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:14:08.481 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 01:14:08.481 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:14:08.481 06:13:01 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:14:08.481 06:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:14:08.481 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:14:08.481 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:14:08.481 fio-3.35 01:14:08.481 Starting 2 threads 01:14:18.475 01:14:18.475 filename0: (groupid=0, jobs=1): err= 0: pid=82751: Mon Dec 9 06:13:12 2024 01:14:18.475 read: IOPS=6452, BW=25.2MiB/s (26.4MB/s)(252MiB/10001msec) 01:14:18.475 slat (nsec): min=5348, max=99047, avg=10494.04, stdev=2837.65 01:14:18.475 clat (usec): min=513, max=4092, avg=592.57, stdev=55.45 01:14:18.475 lat (usec): min=522, max=4102, avg=603.06, stdev=55.52 01:14:18.475 clat percentiles (usec): 01:14:18.475 | 1.00th=[ 537], 5.00th=[ 553], 10.00th=[ 562], 20.00th=[ 570], 01:14:18.475 | 30.00th=[ 578], 40.00th=[ 586], 50.00th=[ 586], 60.00th=[ 594], 01:14:18.475 | 70.00th=[ 603], 80.00th=[ 611], 90.00th=[ 627], 95.00th=[ 635], 01:14:18.475 | 99.00th=[ 660], 99.50th=[ 676], 99.90th=[ 1221], 99.95th=[ 1844], 01:14:18.475 | 99.99th=[ 2999] 01:14:18.475 bw ( KiB/s): min=25024, max=26112, per=50.02%, avg=25824.00, stdev=275.48, samples=19 01:14:18.475 iops : min= 6256, max= 6528, avg=6456.00, stdev=68.87, samples=19 01:14:18.475 lat (usec) : 750=99.78%, 1000=0.11% 01:14:18.475 lat (msec) : 2=0.07%, 4=0.03%, 10=0.01% 01:14:18.475 cpu : usr=87.82%, sys=11.07%, ctx=11, majf=0, minf=0 01:14:18.475 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:14:18.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:18.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:18.475 issued rwts: total=64536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:18.475 latency : target=0, window=0, percentile=100.00%, depth=4 01:14:18.475 filename1: (groupid=0, jobs=1): err= 0: pid=82752: Mon Dec 9 06:13:12 2024 01:14:18.475 read: IOPS=6452, BW=25.2MiB/s (26.4MB/s)(252MiB/10001msec) 01:14:18.475 slat (nsec): min=5391, max=98774, avg=10604.94, stdev=2844.43 01:14:18.475 clat (usec): min=486, max=4095, avg=592.67, stdev=57.54 01:14:18.475 lat (usec): min=492, max=4105, avg=603.27, stdev=57.92 01:14:18.475 clat percentiles (usec): 01:14:18.475 | 1.00th=[ 519], 5.00th=[ 537], 10.00th=[ 553], 20.00th=[ 570], 01:14:18.475 | 30.00th=[ 578], 40.00th=[ 586], 50.00th=[ 594], 60.00th=[ 603], 01:14:18.475 | 70.00th=[ 611], 80.00th=[ 619], 90.00th=[ 627], 95.00th=[ 644], 01:14:18.475 | 99.00th=[ 668], 99.50th=[ 676], 99.90th=[ 1221], 99.95th=[ 1844], 01:14:18.475 | 99.99th=[ 3032] 01:14:18.475 bw ( KiB/s): min=25024, max=26112, per=50.02%, avg=25824.00, stdev=275.48, samples=19 01:14:18.475 iops : min= 6256, max= 6528, avg=6456.00, stdev=68.87, samples=19 01:14:18.475 lat (usec) : 500=0.03%, 750=99.76%, 1000=0.10% 01:14:18.475 lat (msec) : 2=0.07%, 4=0.03%, 10=0.01% 01:14:18.475 cpu : usr=87.22%, sys=11.63%, ctx=14, majf=0, minf=0 01:14:18.475 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:14:18.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:18.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:18.476 issued rwts: 
total=64536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:18.476 latency : target=0, window=0, percentile=100.00%, depth=4 01:14:18.476 01:14:18.476 Run status group 0 (all jobs): 01:14:18.476 READ: bw=50.4MiB/s (52.9MB/s), 25.2MiB/s-25.2MiB/s (26.4MB/s-26.4MB/s), io=504MiB (529MB), run=10001-10001msec 01:14:18.476 06:13:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 01:14:18.476 06:13:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 01:14:18.476 06:13:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 01:14:18.476 06:13:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 01:14:18.476 06:13:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 01:14:18.476 06:13:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:14:18.476 06:13:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:18.476 06:13:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:14:18.476 06:13:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:18.476 06:13:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:14:18.476 06:13:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:18.476 06:13:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:14:18.476 06:13:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:18.476 06:13:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 01:14:18.476 06:13:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 01:14:18.476 06:13:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 01:14:18.476 06:13:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:14:18.476 06:13:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:18.476 06:13:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:14:18.476 06:13:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:18.476 06:13:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:14:18.476 06:13:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:18.476 06:13:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:14:18.476 ************************************ 01:14:18.476 END TEST fio_dif_1_multi_subsystems 01:14:18.476 ************************************ 01:14:18.476 06:13:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:18.476 01:14:18.476 real 0m11.279s 01:14:18.476 user 0m18.324s 01:14:18.476 sys 0m2.689s 01:14:18.476 06:13:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:18.476 06:13:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:14:18.476 06:13:12 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 01:14:18.476 06:13:12 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
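The fio_dif_1_multi_subsystems run above ends with both reader threads at ~25.2MiB/s against the two null-bdev-backed subsystems, and the trace then tears them down through rpc_cmd, the harness wrapper around SPDK's JSON-RPC client. A hedged, standalone sketch of that teardown (assuming the stock scripts/rpc.py client, the repo path printed in the trace, and the default RPC socket) would look roughly like:

# Sketch of the teardown performed above via rpc_cmd; paths/socket are assumptions.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
RPC="$SPDK_DIR/scripts/rpc.py"

for sub in 0 1; do
    # Drop the NVMe-oF subsystem first, then the null bdev that backed it,
    # mirroring destroy_subsystem() in target/dif.sh.
    "$RPC" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub"
    "$RPC" bdev_null_delete "bdev_null$sub"
done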
01:14:18.476 06:13:12 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:18.476 06:13:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:14:18.476 ************************************ 01:14:18.476 START TEST fio_dif_rand_params 01:14:18.476 ************************************ 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:18.476 bdev_null0 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:18.476 [2024-12-09 06:13:12.962194] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:18.476 06:13:12 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:14:18.476 { 01:14:18.476 "params": { 01:14:18.476 "name": "Nvme$subsystem", 01:14:18.476 "trtype": "$TEST_TRANSPORT", 01:14:18.476 "traddr": "$NVMF_FIRST_TARGET_IP", 01:14:18.476 "adrfam": "ipv4", 01:14:18.476 "trsvcid": "$NVMF_PORT", 01:14:18.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:14:18.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:14:18.476 "hdgst": ${hdgst:-false}, 01:14:18.476 "ddgst": ${ddgst:-false} 01:14:18.476 }, 01:14:18.476 "method": "bdev_nvme_attach_controller" 01:14:18.476 } 01:14:18.476 EOF 01:14:18.476 )") 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
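The heredoc in the trace above is the per-subsystem template that gen_nvmf_target_json expands for each cnode: it fills in the transport, target IP, port and NQNs, and the resulting fragments are comma-joined (IFS=',') and run through jq before the fio plugin receives them via --spdk_json_conf. A simplified standalone sketch of that assembly pattern follows; values are hard-coded to the ones this run uses, and the real helper additionally embeds the joined list in the full JSON document it hands to fio.

# Simplified sketch of the gen_nvmf_target_json pattern seen in the trace.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.3
NVMF_PORT=4420

config=()
for subsystem in 0 1 2; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# Comma-join the fragments and validate/pretty-print with jq, as the trace does.
IFS=,
jq . <<<"[${config[*]}]"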
01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 01:14:18.476 06:13:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:14:18.476 "params": { 01:14:18.476 "name": "Nvme0", 01:14:18.476 "trtype": "tcp", 01:14:18.476 "traddr": "10.0.0.3", 01:14:18.476 "adrfam": "ipv4", 01:14:18.476 "trsvcid": "4420", 01:14:18.476 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:14:18.476 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:14:18.476 "hdgst": false, 01:14:18.476 "ddgst": false 01:14:18.476 }, 01:14:18.476 "method": "bdev_nvme_attach_controller" 01:14:18.476 }' 01:14:18.476 06:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:14:18.476 06:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:14:18.476 06:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:14:18.476 06:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:14:18.476 06:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:14:18.476 06:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:14:18.736 06:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:14:18.736 06:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:14:18.736 06:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:14:18.736 06:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:14:18.736 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 01:14:18.736 ... 
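The entries just above show how the wrapper finally launches fio for this test: the SPDK fio bdev plugin is LD_PRELOADed into the stock binary at /usr/src/fio/fio, the generated JSON config arrives on /dev/fd/62 and the job file on /dev/fd/61, and the three jobs reported below then start. A minimal sketch of the same invocation, using ordinary files in place of the generated descriptors (file names here are illustrative), might be:

# Sketch of the LD_PRELOAD + spdk_bdev ioengine invocation shown in the trace.
SPDK_DIR=/home/vagrant/spdk_repo/spdk          # path as printed in the trace
FIO_BIN=/usr/src/fio/fio                       # fio binary used by the harness

LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" "$FIO_BIN" \
    --ioengine=spdk_bdev \
    --spdk_json_conf ./nvme_attach.json \
    ./dif_job.fio
# The job file would address the attached controller's namespace by bdev name,
# e.g. filename=Nvme0n1 (illustrative), with the bs/iodepth/rw values used below.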
01:14:18.736 fio-3.35 01:14:18.736 Starting 3 threads 01:14:25.328 01:14:25.328 filename0: (groupid=0, jobs=1): err= 0: pid=82904: Mon Dec 9 06:13:18 2024 01:14:25.328 read: IOPS=335, BW=42.0MiB/s (44.0MB/s)(210MiB/5004msec) 01:14:25.328 slat (nsec): min=5559, max=40761, avg=12597.08, stdev=7572.09 01:14:25.328 clat (usec): min=7587, max=9712, avg=8902.80, stdev=153.30 01:14:25.328 lat (usec): min=7593, max=9741, avg=8915.40, stdev=153.72 01:14:25.328 clat percentiles (usec): 01:14:25.328 | 1.00th=[ 8717], 5.00th=[ 8717], 10.00th=[ 8717], 20.00th=[ 8848], 01:14:25.328 | 30.00th=[ 8848], 40.00th=[ 8848], 50.00th=[ 8848], 60.00th=[ 8848], 01:14:25.328 | 70.00th=[ 8979], 80.00th=[ 8979], 90.00th=[ 9110], 95.00th=[ 9241], 01:14:25.328 | 99.00th=[ 9372], 99.50th=[ 9634], 99.90th=[ 9765], 99.95th=[ 9765], 01:14:25.328 | 99.99th=[ 9765] 01:14:25.328 bw ( KiB/s): min=42240, max=43776, per=33.30%, avg=42931.20, stdev=435.95, samples=10 01:14:25.328 iops : min= 330, max= 342, avg=335.40, stdev= 3.41, samples=10 01:14:25.328 lat (msec) : 10=100.00% 01:14:25.328 cpu : usr=94.64%, sys=4.88%, ctx=6, majf=0, minf=0 01:14:25.328 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:14:25.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:25.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:25.328 issued rwts: total=1680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:25.328 latency : target=0, window=0, percentile=100.00%, depth=3 01:14:25.328 filename0: (groupid=0, jobs=1): err= 0: pid=82905: Mon Dec 9 06:13:18 2024 01:14:25.328 read: IOPS=335, BW=42.0MiB/s (44.0MB/s)(210MiB/5002msec) 01:14:25.329 slat (nsec): min=5682, max=30739, avg=9461.85, stdev=3085.59 01:14:25.329 clat (usec): min=5003, max=9977, avg=8908.14, stdev=220.29 01:14:25.329 lat (usec): min=5011, max=10006, avg=8917.60, stdev=220.48 01:14:25.329 clat percentiles (usec): 01:14:25.329 | 1.00th=[ 8717], 5.00th=[ 8717], 10.00th=[ 8848], 20.00th=[ 8848], 01:14:25.329 | 30.00th=[ 8848], 40.00th=[ 8848], 50.00th=[ 8848], 60.00th=[ 8848], 01:14:25.329 | 70.00th=[ 8979], 80.00th=[ 8979], 90.00th=[ 9110], 95.00th=[ 9241], 01:14:25.329 | 99.00th=[ 9372], 99.50th=[ 9634], 99.90th=[10028], 99.95th=[10028], 01:14:25.329 | 99.99th=[10028] 01:14:25.329 bw ( KiB/s): min=42240, max=43776, per=33.36%, avg=43008.00, stdev=384.00, samples=9 01:14:25.329 iops : min= 330, max= 342, avg=336.00, stdev= 3.00, samples=9 01:14:25.329 lat (msec) : 10=100.00% 01:14:25.329 cpu : usr=94.36%, sys=5.06%, ctx=12, majf=0, minf=0 01:14:25.329 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:14:25.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:25.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:25.329 issued rwts: total=1680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:25.329 latency : target=0, window=0, percentile=100.00%, depth=3 01:14:25.329 filename0: (groupid=0, jobs=1): err= 0: pid=82906: Mon Dec 9 06:13:18 2024 01:14:25.329 read: IOPS=336, BW=42.0MiB/s (44.1MB/s)(210MiB/5007msec) 01:14:25.329 slat (nsec): min=5528, max=36762, avg=13244.99, stdev=7499.06 01:14:25.329 clat (usec): min=3455, max=9588, avg=8891.64, stdev=269.50 01:14:25.329 lat (usec): min=3466, max=9611, avg=8904.88, stdev=269.70 01:14:25.329 clat percentiles (usec): 01:14:25.329 | 1.00th=[ 8717], 5.00th=[ 8717], 10.00th=[ 8717], 20.00th=[ 8848], 01:14:25.329 | 30.00th=[ 8848], 40.00th=[ 8848], 50.00th=[ 8848], 60.00th=[ 8848], 
01:14:25.329 | 70.00th=[ 8979], 80.00th=[ 8979], 90.00th=[ 9110], 95.00th=[ 9241], 01:14:25.329 | 99.00th=[ 9372], 99.50th=[ 9372], 99.90th=[ 9634], 99.95th=[ 9634], 01:14:25.329 | 99.99th=[ 9634] 01:14:25.329 bw ( KiB/s): min=42240, max=43776, per=33.36%, avg=43008.00, stdev=362.04, samples=10 01:14:25.329 iops : min= 330, max= 342, avg=336.00, stdev= 2.83, samples=10 01:14:25.329 lat (msec) : 4=0.18%, 10=99.82% 01:14:25.329 cpu : usr=90.57%, sys=8.97%, ctx=12, majf=0, minf=0 01:14:25.329 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:14:25.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:25.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:25.329 issued rwts: total=1683,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:25.329 latency : target=0, window=0, percentile=100.00%, depth=3 01:14:25.329 01:14:25.329 Run status group 0 (all jobs): 01:14:25.329 READ: bw=126MiB/s (132MB/s), 42.0MiB/s-42.0MiB/s (44.0MB/s-44.1MB/s), io=630MiB (661MB), run=5002-5007msec 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:25.329 bdev_null0 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:25.329 [2024-12-09 06:13:18.984281] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:25.329 06:13:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:25.329 bdev_null1 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:25.329 bdev_null2 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 01:14:25.329 06:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
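Just above, create_subsystems 0 1 2 builds three DIF-type-2 null bdevs and exports each as its own NVMe-oF subsystem listening on 10.0.0.3:4420. Outside the harness, the same sequence could be issued directly with SPDK's scripts/rpc.py (default RPC socket assumed), mirroring the rpc_cmd calls in the trace:

# Standalone equivalent of the per-subsystem setup done above via rpc_cmd.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
RPC="$SPDK_DIR/scripts/rpc.py"

for sub in 0 1 2; do
    # 64 MiB null bdev with 512-byte blocks, 16-byte metadata, DIF type 2.
    "$RPC" bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
    # Export it as an NVMe-oF subsystem over TCP on 10.0.0.3:4420.
    "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
        --serial-number "53313233-$sub" --allow-any-host
    "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
    "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
        -t tcp -a 10.0.0.3 -s 4420
done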
01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:14:25.330 { 01:14:25.330 "params": { 01:14:25.330 "name": "Nvme$subsystem", 01:14:25.330 "trtype": "$TEST_TRANSPORT", 01:14:25.330 "traddr": "$NVMF_FIRST_TARGET_IP", 01:14:25.330 "adrfam": "ipv4", 01:14:25.330 "trsvcid": "$NVMF_PORT", 01:14:25.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:14:25.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:14:25.330 "hdgst": ${hdgst:-false}, 01:14:25.330 "ddgst": ${ddgst:-false} 01:14:25.330 }, 01:14:25.330 "method": "bdev_nvme_attach_controller" 01:14:25.330 } 01:14:25.330 EOF 01:14:25.330 )") 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:14:25.330 { 01:14:25.330 "params": { 01:14:25.330 "name": "Nvme$subsystem", 01:14:25.330 "trtype": "$TEST_TRANSPORT", 01:14:25.330 "traddr": "$NVMF_FIRST_TARGET_IP", 01:14:25.330 "adrfam": "ipv4", 01:14:25.330 "trsvcid": "$NVMF_PORT", 01:14:25.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:14:25.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:14:25.330 "hdgst": ${hdgst:-false}, 01:14:25.330 "ddgst": ${ddgst:-false} 01:14:25.330 }, 01:14:25.330 "method": "bdev_nvme_attach_controller" 01:14:25.330 } 01:14:25.330 EOF 01:14:25.330 )") 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:14:25.330 { 01:14:25.330 "params": { 01:14:25.330 "name": "Nvme$subsystem", 01:14:25.330 "trtype": "$TEST_TRANSPORT", 01:14:25.330 "traddr": "$NVMF_FIRST_TARGET_IP", 01:14:25.330 "adrfam": "ipv4", 01:14:25.330 "trsvcid": "$NVMF_PORT", 01:14:25.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:14:25.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:14:25.330 "hdgst": ${hdgst:-false}, 01:14:25.330 "ddgst": ${ddgst:-false} 01:14:25.330 }, 01:14:25.330 "method": "bdev_nvme_attach_controller" 01:14:25.330 } 01:14:25.330 EOF 01:14:25.330 )") 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:14:25.330 "params": { 01:14:25.330 "name": "Nvme0", 01:14:25.330 "trtype": "tcp", 01:14:25.330 "traddr": "10.0.0.3", 01:14:25.330 "adrfam": "ipv4", 01:14:25.330 "trsvcid": "4420", 01:14:25.330 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:14:25.330 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:14:25.330 "hdgst": false, 01:14:25.330 "ddgst": false 01:14:25.330 }, 01:14:25.330 "method": "bdev_nvme_attach_controller" 01:14:25.330 },{ 01:14:25.330 "params": { 01:14:25.330 "name": "Nvme1", 01:14:25.330 "trtype": "tcp", 01:14:25.330 "traddr": "10.0.0.3", 01:14:25.330 "adrfam": "ipv4", 01:14:25.330 "trsvcid": "4420", 01:14:25.330 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:14:25.330 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:14:25.330 "hdgst": false, 01:14:25.330 "ddgst": false 01:14:25.330 }, 01:14:25.330 "method": "bdev_nvme_attach_controller" 01:14:25.330 },{ 01:14:25.330 "params": { 01:14:25.330 "name": "Nvme2", 01:14:25.330 "trtype": "tcp", 01:14:25.330 "traddr": "10.0.0.3", 01:14:25.330 "adrfam": "ipv4", 01:14:25.330 "trsvcid": "4420", 01:14:25.330 "subnqn": "nqn.2016-06.io.spdk:cnode2", 01:14:25.330 "hostnqn": "nqn.2016-06.io.spdk:host2", 01:14:25.330 "hdgst": false, 01:14:25.330 "ddgst": false 01:14:25.330 }, 01:14:25.330 "method": "bdev_nvme_attach_controller" 01:14:25.330 }' 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:14:25.330 06:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:14:25.330 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:14:25.330 ... 01:14:25.330 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:14:25.330 ... 01:14:25.330 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:14:25.330 ... 01:14:25.330 fio-3.35 01:14:25.330 Starting 24 threads 01:14:37.585 01:14:37.585 filename0: (groupid=0, jobs=1): err= 0: pid=83006: Mon Dec 9 06:13:30 2024 01:14:37.585 read: IOPS=298, BW=1194KiB/s (1223kB/s)(11.7MiB/10030msec) 01:14:37.585 slat (usec): min=5, max=8038, avg=35.66, stdev=279.84 01:14:37.585 clat (msec): min=3, max=232, avg=53.40, stdev=19.74 01:14:37.585 lat (msec): min=3, max=232, avg=53.43, stdev=19.74 01:14:37.585 clat percentiles (msec): 01:14:37.585 | 1.00th=[ 12], 5.00th=[ 22], 10.00th=[ 33], 20.00th=[ 40], 01:14:37.585 | 30.00th=[ 47], 40.00th=[ 50], 50.00th=[ 54], 60.00th=[ 56], 01:14:37.585 | 70.00th=[ 59], 80.00th=[ 64], 90.00th=[ 74], 95.00th=[ 85], 01:14:37.585 | 99.00th=[ 123], 99.50th=[ 140], 99.90th=[ 232], 99.95th=[ 232], 01:14:37.585 | 99.99th=[ 232] 01:14:37.585 bw ( KiB/s): min= 936, max= 2294, per=4.23%, avg=1193.10, stdev=281.53, samples=20 01:14:37.585 iops : min= 234, max= 573, avg=298.25, stdev=70.28, samples=20 01:14:37.585 lat (msec) : 4=0.03%, 10=0.47%, 20=3.61%, 50=38.33%, 100=55.96% 01:14:37.585 lat (msec) : 250=1.60% 01:14:37.585 cpu : usr=43.98%, sys=1.55%, ctx=1446, majf=0, minf=9 01:14:37.585 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.3%, 16=17.0%, 32=0.0%, >=64=0.0% 01:14:37.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.585 complete : 0=0.0%, 4=87.9%, 8=12.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.585 issued rwts: total=2995,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:37.585 latency : target=0, window=0, percentile=100.00%, depth=16 01:14:37.585 filename0: (groupid=0, jobs=1): err= 0: pid=83007: Mon Dec 9 06:13:30 2024 01:14:37.585 read: IOPS=283, BW=1136KiB/s (1163kB/s)(11.1MiB/10016msec) 01:14:37.585 slat (usec): min=2, max=10057, avg=51.39, stdev=433.64 01:14:37.585 clat (msec): min=18, max=252, avg=56.08, stdev=18.16 01:14:37.585 lat (msec): min=18, max=252, avg=56.13, stdev=18.16 01:14:37.585 clat percentiles (msec): 01:14:37.585 | 1.00th=[ 26], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 43], 01:14:37.585 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 56], 60.00th=[ 59], 01:14:37.585 | 70.00th=[ 61], 80.00th=[ 66], 90.00th=[ 75], 95.00th=[ 85], 01:14:37.585 | 99.00th=[ 126], 99.50th=[ 159], 99.90th=[ 159], 99.95th=[ 253], 01:14:37.585 | 99.99th=[ 253] 01:14:37.585 bw ( KiB/s): min= 768, max= 1296, per=4.02%, avg=1133.45, stdev=117.83, samples=20 01:14:37.585 iops : min= 192, max= 324, avg=283.35, stdev=29.45, samples=20 01:14:37.585 lat (msec) : 20=0.32%, 50=39.94%, 100=58.05%, 250=1.62%, 500=0.07% 01:14:37.585 cpu : usr=40.34%, sys=1.19%, ctx=1071, majf=0, minf=9 01:14:37.585 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=79.7%, 16=16.3%, 32=0.0%, >=64=0.0% 01:14:37.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.585 complete : 0=0.0%, 4=88.4%, 8=10.9%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.585 issued rwts: total=2844,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 01:14:37.585 latency : target=0, window=0, percentile=100.00%, depth=16 01:14:37.585 filename0: (groupid=0, jobs=1): err= 0: pid=83008: Mon Dec 9 06:13:30 2024 01:14:37.585 read: IOPS=293, BW=1175KiB/s (1203kB/s)(11.5MiB/10013msec) 01:14:37.585 slat (usec): min=2, max=12039, avg=46.15, stdev=439.14 01:14:37.585 clat (msec): min=15, max=169, avg=54.25, stdev=18.16 01:14:37.585 lat (msec): min=15, max=169, avg=54.29, stdev=18.17 01:14:37.585 clat percentiles (msec): 01:14:37.585 | 1.00th=[ 22], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 39], 01:14:37.585 | 30.00th=[ 46], 40.00th=[ 49], 50.00th=[ 54], 60.00th=[ 57], 01:14:37.585 | 70.00th=[ 60], 80.00th=[ 64], 90.00th=[ 74], 95.00th=[ 85], 01:14:37.585 | 99.00th=[ 121], 99.50th=[ 159], 99.90th=[ 159], 99.95th=[ 169], 01:14:37.585 | 99.99th=[ 169] 01:14:37.585 bw ( KiB/s): min= 960, max= 1328, per=4.15%, avg=1172.15, stdev=113.93, samples=20 01:14:37.585 iops : min= 240, max= 332, avg=293.00, stdev=28.44, samples=20 01:14:37.585 lat (msec) : 20=0.20%, 50=44.29%, 100=53.70%, 250=1.80% 01:14:37.585 cpu : usr=38.78%, sys=1.33%, ctx=1341, majf=0, minf=9 01:14:37.585 IO depths : 1=0.1%, 2=0.6%, 4=2.0%, 8=81.0%, 16=16.3%, 32=0.0%, >=64=0.0% 01:14:37.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.585 complete : 0=0.0%, 4=87.9%, 8=11.6%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.585 issued rwts: total=2942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:37.585 latency : target=0, window=0, percentile=100.00%, depth=16 01:14:37.585 filename0: (groupid=0, jobs=1): err= 0: pid=83009: Mon Dec 9 06:13:30 2024 01:14:37.585 read: IOPS=287, BW=1151KiB/s (1179kB/s)(11.3MiB/10036msec) 01:14:37.585 slat (usec): min=5, max=8054, avg=26.93, stdev=264.85 01:14:37.585 clat (msec): min=5, max=229, avg=55.43, stdev=19.96 01:14:37.585 lat (msec): min=5, max=229, avg=55.46, stdev=19.96 01:14:37.585 clat percentiles (msec): 01:14:37.585 | 1.00th=[ 9], 5.00th=[ 26], 10.00th=[ 36], 20.00th=[ 42], 01:14:37.585 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 57], 60.00th=[ 60], 01:14:37.585 | 70.00th=[ 61], 80.00th=[ 65], 90.00th=[ 74], 95.00th=[ 87], 01:14:37.585 | 99.00th=[ 127], 99.50th=[ 161], 99.90th=[ 161], 99.95th=[ 230], 01:14:37.585 | 99.99th=[ 230] 01:14:37.585 bw ( KiB/s): min= 768, max= 2176, per=4.07%, avg=1148.80, stdev=273.19, samples=20 01:14:37.585 iops : min= 192, max= 544, avg=287.20, stdev=68.30, samples=20 01:14:37.585 lat (msec) : 10=1.18%, 20=2.63%, 50=36.15%, 100=58.31%, 250=1.73% 01:14:37.585 cpu : usr=37.11%, sys=1.60%, ctx=1049, majf=0, minf=9 01:14:37.585 IO depths : 1=0.1%, 2=1.6%, 4=6.4%, 8=76.0%, 16=16.0%, 32=0.0%, >=64=0.0% 01:14:37.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.585 complete : 0=0.0%, 4=89.5%, 8=9.1%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.585 issued rwts: total=2888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:37.585 latency : target=0, window=0, percentile=100.00%, depth=16 01:14:37.585 filename0: (groupid=0, jobs=1): err= 0: pid=83010: Mon Dec 9 06:13:30 2024 01:14:37.585 read: IOPS=297, BW=1190KiB/s (1219kB/s)(11.6MiB/10022msec) 01:14:37.585 slat (usec): min=3, max=8032, avg=34.04, stdev=247.28 01:14:37.585 clat (msec): min=15, max=252, avg=53.59, stdev=21.47 01:14:37.585 lat (msec): min=15, max=252, avg=53.63, stdev=21.47 01:14:37.585 clat percentiles (msec): 01:14:37.585 | 1.00th=[ 19], 5.00th=[ 30], 10.00th=[ 34], 20.00th=[ 40], 01:14:37.585 | 30.00th=[ 46], 40.00th=[ 49], 50.00th=[ 53], 60.00th=[ 56], 01:14:37.585 | 
70.00th=[ 59], 80.00th=[ 63], 90.00th=[ 74], 95.00th=[ 84], 01:14:37.585 | 99.00th=[ 113], 99.50th=[ 243], 99.90th=[ 253], 99.95th=[ 253], 01:14:37.585 | 99.99th=[ 253] 01:14:37.585 bw ( KiB/s): min= 832, max= 1664, per=4.21%, avg=1188.65, stdev=167.98, samples=20 01:14:37.585 iops : min= 208, max= 416, avg=297.15, stdev=41.99, samples=20 01:14:37.585 lat (msec) : 20=1.01%, 50=42.29%, 100=55.60%, 250=0.64%, 500=0.47% 01:14:37.585 cpu : usr=44.08%, sys=1.43%, ctx=1430, majf=0, minf=9 01:14:37.585 IO depths : 1=0.1%, 2=0.5%, 4=2.1%, 8=81.1%, 16=16.3%, 32=0.0%, >=64=0.0% 01:14:37.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.585 complete : 0=0.0%, 4=87.9%, 8=11.6%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.585 issued rwts: total=2982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:37.585 latency : target=0, window=0, percentile=100.00%, depth=16 01:14:37.585 filename0: (groupid=0, jobs=1): err= 0: pid=83011: Mon Dec 9 06:13:30 2024 01:14:37.585 read: IOPS=303, BW=1214KiB/s (1244kB/s)(11.9MiB/10010msec) 01:14:37.585 slat (usec): min=3, max=8046, avg=38.17, stdev=324.65 01:14:37.585 clat (msec): min=14, max=224, avg=52.52, stdev=20.05 01:14:37.585 lat (msec): min=14, max=224, avg=52.56, stdev=20.05 01:14:37.585 clat percentiles (msec): 01:14:37.585 | 1.00th=[ 24], 5.00th=[ 31], 10.00th=[ 35], 20.00th=[ 37], 01:14:37.585 | 30.00th=[ 42], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 56], 01:14:37.585 | 70.00th=[ 59], 80.00th=[ 62], 90.00th=[ 72], 95.00th=[ 84], 01:14:37.585 | 99.00th=[ 118], 99.50th=[ 167], 99.90th=[ 224], 99.95th=[ 224], 01:14:37.585 | 99.99th=[ 224] 01:14:37.585 bw ( KiB/s): min= 784, max= 1392, per=4.29%, avg=1211.55, stdev=154.25, samples=20 01:14:37.585 iops : min= 196, max= 348, avg=302.85, stdev=38.52, samples=20 01:14:37.585 lat (msec) : 20=0.36%, 50=49.36%, 100=49.13%, 250=1.15% 01:14:37.585 cpu : usr=37.93%, sys=1.59%, ctx=1148, majf=0, minf=9 01:14:37.585 IO depths : 1=0.1%, 2=0.7%, 4=2.6%, 8=81.1%, 16=15.6%, 32=0.0%, >=64=0.0% 01:14:37.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.585 complete : 0=0.0%, 4=87.6%, 8=11.8%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.585 issued rwts: total=3039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:37.585 latency : target=0, window=0, percentile=100.00%, depth=16 01:14:37.585 filename0: (groupid=0, jobs=1): err= 0: pid=83012: Mon Dec 9 06:13:30 2024 01:14:37.585 read: IOPS=292, BW=1170KiB/s (1198kB/s)(11.4MiB/10008msec) 01:14:37.585 slat (usec): min=2, max=10043, avg=37.83, stdev=357.20 01:14:37.585 clat (msec): min=8, max=228, avg=54.57, stdev=19.76 01:14:37.585 lat (msec): min=8, max=228, avg=54.61, stdev=19.76 01:14:37.585 clat percentiles (msec): 01:14:37.585 | 1.00th=[ 20], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 40], 01:14:37.585 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 52], 60.00th=[ 58], 01:14:37.585 | 70.00th=[ 60], 80.00th=[ 62], 90.00th=[ 73], 95.00th=[ 85], 01:14:37.585 | 99.00th=[ 112], 99.50th=[ 165], 99.90th=[ 228], 99.95th=[ 228], 01:14:37.585 | 99.99th=[ 228] 01:14:37.585 bw ( KiB/s): min= 784, max= 1376, per=4.13%, avg=1164.40, stdev=145.13, samples=20 01:14:37.585 iops : min= 196, max= 344, avg=291.10, stdev=36.28, samples=20 01:14:37.585 lat (msec) : 10=0.24%, 20=0.79%, 50=43.66%, 100=54.15%, 250=1.16% 01:14:37.585 cpu : usr=33.98%, sys=1.55%, ctx=984, majf=0, minf=9 01:14:37.586 IO depths : 1=0.1%, 2=0.8%, 4=3.1%, 8=80.0%, 16=16.0%, 32=0.0%, >=64=0.0% 01:14:37.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
01:14:37.586 complete : 0=0.0%, 4=88.2%, 8=11.1%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.586 issued rwts: total=2927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:37.586 latency : target=0, window=0, percentile=100.00%, depth=16 01:14:37.586 filename0: (groupid=0, jobs=1): err= 0: pid=83013: Mon Dec 9 06:13:30 2024 01:14:37.586 read: IOPS=298, BW=1195KiB/s (1223kB/s)(11.7MiB/10002msec) 01:14:37.586 slat (usec): min=2, max=8047, avg=51.49, stdev=484.39 01:14:37.586 clat (msec): min=8, max=165, avg=53.35, stdev=18.21 01:14:37.586 lat (msec): min=8, max=165, avg=53.40, stdev=18.22 01:14:37.586 clat percentiles (msec): 01:14:37.586 | 1.00th=[ 23], 5.00th=[ 31], 10.00th=[ 36], 20.00th=[ 38], 01:14:37.586 | 30.00th=[ 47], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 57], 01:14:37.586 | 70.00th=[ 61], 80.00th=[ 61], 90.00th=[ 73], 95.00th=[ 85], 01:14:37.586 | 99.00th=[ 120], 99.50th=[ 159], 99.90th=[ 159], 99.95th=[ 165], 01:14:37.586 | 99.99th=[ 165] 01:14:37.586 bw ( KiB/s): min= 880, max= 1328, per=4.20%, avg=1184.11, stdev=106.07, samples=19 01:14:37.586 iops : min= 220, max= 332, avg=296.00, stdev=26.50, samples=19 01:14:37.586 lat (msec) : 10=0.20%, 20=0.27%, 50=48.98%, 100=48.91%, 250=1.64% 01:14:37.586 cpu : usr=33.36%, sys=1.37%, ctx=986, majf=0, minf=9 01:14:37.586 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=81.2%, 16=16.0%, 32=0.0%, >=64=0.0% 01:14:37.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.586 complete : 0=0.0%, 4=87.7%, 8=11.8%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.586 issued rwts: total=2987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:37.586 latency : target=0, window=0, percentile=100.00%, depth=16 01:14:37.586 filename1: (groupid=0, jobs=1): err= 0: pid=83014: Mon Dec 9 06:13:30 2024 01:14:37.586 read: IOPS=299, BW=1197KiB/s (1226kB/s)(11.7MiB/10023msec) 01:14:37.586 slat (usec): min=5, max=8049, avg=39.00, stdev=303.82 01:14:37.586 clat (msec): min=14, max=226, avg=53.26, stdev=18.18 01:14:37.586 lat (msec): min=14, max=226, avg=53.29, stdev=18.18 01:14:37.586 clat percentiles (msec): 01:14:37.586 | 1.00th=[ 22], 5.00th=[ 31], 10.00th=[ 34], 20.00th=[ 40], 01:14:37.586 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 54], 60.00th=[ 56], 01:14:37.586 | 70.00th=[ 59], 80.00th=[ 63], 90.00th=[ 73], 95.00th=[ 84], 01:14:37.586 | 99.00th=[ 123], 99.50th=[ 155], 99.90th=[ 155], 99.95th=[ 226], 01:14:37.586 | 99.99th=[ 226] 01:14:37.586 bw ( KiB/s): min= 672, max= 1664, per=4.24%, avg=1195.60, stdev=184.04, samples=20 01:14:37.586 iops : min= 168, max= 416, avg=298.90, stdev=46.01, samples=20 01:14:37.586 lat (msec) : 20=0.90%, 50=43.67%, 100=53.80%, 250=1.63% 01:14:37.586 cpu : usr=41.50%, sys=1.53%, ctx=1363, majf=0, minf=9 01:14:37.586 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=80.8%, 16=16.0%, 32=0.0%, >=64=0.0% 01:14:37.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.586 complete : 0=0.0%, 4=87.9%, 8=11.5%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.586 issued rwts: total=3000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:37.586 latency : target=0, window=0, percentile=100.00%, depth=16 01:14:37.586 filename1: (groupid=0, jobs=1): err= 0: pid=83015: Mon Dec 9 06:13:30 2024 01:14:37.586 read: IOPS=288, BW=1154KiB/s (1182kB/s)(11.3MiB/10004msec) 01:14:37.586 slat (usec): min=2, max=8049, avg=37.65, stdev=354.40 01:14:37.586 clat (msec): min=9, max=232, avg=55.30, stdev=18.24 01:14:37.586 lat (msec): min=9, max=232, avg=55.34, stdev=18.25 01:14:37.586 clat percentiles (msec): 01:14:37.586 | 1.00th=[ 23], 5.00th=[ 
33], 10.00th=[ 36], 20.00th=[ 43], 01:14:37.586 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 55], 60.00th=[ 58], 01:14:37.586 | 70.00th=[ 61], 80.00th=[ 64], 90.00th=[ 74], 95.00th=[ 84], 01:14:37.586 | 99.00th=[ 122], 99.50th=[ 140], 99.90th=[ 232], 99.95th=[ 232], 01:14:37.586 | 99.99th=[ 232] 01:14:37.586 bw ( KiB/s): min= 968, max= 1408, per=4.05%, avg=1144.74, stdev=101.76, samples=19 01:14:37.586 iops : min= 242, max= 352, avg=286.16, stdev=25.44, samples=19 01:14:37.586 lat (msec) : 10=0.21%, 20=0.28%, 50=42.26%, 100=55.66%, 250=1.59% 01:14:37.586 cpu : usr=36.16%, sys=1.34%, ctx=1070, majf=0, minf=9 01:14:37.586 IO depths : 1=0.1%, 2=0.3%, 4=0.9%, 8=81.7%, 16=17.0%, 32=0.0%, >=64=0.0% 01:14:37.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.586 complete : 0=0.0%, 4=88.1%, 8=11.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.586 issued rwts: total=2887,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:37.586 latency : target=0, window=0, percentile=100.00%, depth=16 01:14:37.586 filename1: (groupid=0, jobs=1): err= 0: pid=83016: Mon Dec 9 06:13:30 2024 01:14:37.586 read: IOPS=291, BW=1168KiB/s (1196kB/s)(11.4MiB/10015msec) 01:14:37.586 slat (usec): min=2, max=9036, avg=51.49, stdev=450.31 01:14:37.586 clat (msec): min=16, max=255, avg=54.59, stdev=18.79 01:14:37.586 lat (msec): min=16, max=255, avg=54.64, stdev=18.79 01:14:37.586 clat percentiles (msec): 01:14:37.586 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 41], 01:14:37.586 | 30.00th=[ 47], 40.00th=[ 50], 50.00th=[ 54], 60.00th=[ 57], 01:14:37.586 | 70.00th=[ 59], 80.00th=[ 64], 90.00th=[ 74], 95.00th=[ 85], 01:14:37.586 | 99.00th=[ 127], 99.50th=[ 165], 99.90th=[ 165], 99.95th=[ 255], 01:14:37.586 | 99.99th=[ 255] 01:14:37.586 bw ( KiB/s): min= 880, max= 1408, per=4.12%, avg=1163.40, stdev=119.88, samples=20 01:14:37.586 iops : min= 220, max= 352, avg=290.80, stdev=29.96, samples=20 01:14:37.586 lat (msec) : 20=0.10%, 50=41.45%, 100=56.53%, 250=1.85%, 500=0.07% 01:14:37.586 cpu : usr=36.72%, sys=1.46%, ctx=1223, majf=0, minf=9 01:14:37.586 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=80.1%, 16=16.3%, 32=0.0%, >=64=0.0% 01:14:37.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.586 complete : 0=0.0%, 4=88.2%, 8=11.2%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.586 issued rwts: total=2924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:37.586 latency : target=0, window=0, percentile=100.00%, depth=16 01:14:37.586 filename1: (groupid=0, jobs=1): err= 0: pid=83017: Mon Dec 9 06:13:30 2024 01:14:37.586 read: IOPS=284, BW=1138KiB/s (1166kB/s)(11.2MiB/10033msec) 01:14:37.586 slat (usec): min=5, max=8041, avg=27.97, stdev=309.25 01:14:37.586 clat (msec): min=10, max=252, avg=56.03, stdev=21.70 01:14:37.586 lat (msec): min=10, max=252, avg=56.06, stdev=21.70 01:14:37.586 clat percentiles (msec): 01:14:37.586 | 1.00th=[ 14], 5.00th=[ 28], 10.00th=[ 36], 20.00th=[ 47], 01:14:37.586 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 56], 60.00th=[ 60], 01:14:37.586 | 70.00th=[ 61], 80.00th=[ 64], 90.00th=[ 74], 95.00th=[ 85], 01:14:37.586 | 99.00th=[ 121], 99.50th=[ 241], 99.90th=[ 253], 99.95th=[ 253], 01:14:37.586 | 99.99th=[ 253] 01:14:37.586 bw ( KiB/s): min= 800, max= 1824, per=4.02%, avg=1135.60, stdev=198.80, samples=20 01:14:37.586 iops : min= 200, max= 456, avg=283.90, stdev=49.70, samples=20 01:14:37.586 lat (msec) : 20=2.03%, 50=39.89%, 100=56.95%, 250=0.63%, 500=0.49% 01:14:37.586 cpu : usr=33.58%, sys=1.54%, ctx=909, majf=0, minf=9 01:14:37.586 IO depths : 1=0.1%, 2=0.7%, 
4=2.5%, 8=79.7%, 16=17.0%, 32=0.0%, >=64=0.0% 01:14:37.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.586 complete : 0=0.0%, 4=88.7%, 8=10.7%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.586 issued rwts: total=2855,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:37.586 latency : target=0, window=0, percentile=100.00%, depth=16 01:14:37.586 filename1: (groupid=0, jobs=1): err= 0: pid=83018: Mon Dec 9 06:13:30 2024 01:14:37.586 read: IOPS=317, BW=1272KiB/s (1303kB/s)(12.5MiB/10041msec) 01:14:37.586 slat (usec): min=5, max=11035, avg=31.37, stdev=298.32 01:14:37.586 clat (usec): min=1286, max=238528, avg=50089.64, stdev=23494.34 01:14:37.586 lat (usec): min=1295, max=238548, avg=50121.02, stdev=23501.21 01:14:37.586 clat percentiles (usec): 01:14:37.586 | 1.00th=[ 1565], 5.00th=[ 2540], 10.00th=[ 13960], 20.00th=[ 35914], 01:14:37.586 | 30.00th=[ 41681], 40.00th=[ 47973], 50.00th=[ 51643], 60.00th=[ 55837], 01:14:37.586 | 70.00th=[ 58983], 80.00th=[ 63701], 90.00th=[ 73925], 95.00th=[ 84411], 01:14:37.586 | 99.00th=[115868], 99.50th=[135267], 99.90th=[166724], 99.95th=[238027], 01:14:37.586 | 99.99th=[238027] 01:14:37.586 bw ( KiB/s): min= 912, max= 4079, per=4.50%, avg=1271.95, stdev=670.17, samples=20 01:14:37.586 iops : min= 228, max= 1019, avg=317.95, stdev=167.38, samples=20 01:14:37.586 lat (msec) : 2=2.00%, 4=4.51%, 10=1.50%, 20=3.35%, 50=33.92% 01:14:37.586 lat (msec) : 100=53.21%, 250=1.50% 01:14:37.586 cpu : usr=42.73%, sys=1.61%, ctx=1297, majf=0, minf=0 01:14:37.586 IO depths : 1=0.5%, 2=1.2%, 4=3.0%, 8=79.1%, 16=16.2%, 32=0.0%, >=64=0.0% 01:14:37.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.586 complete : 0=0.0%, 4=88.6%, 8=10.8%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.586 issued rwts: total=3193,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:37.586 latency : target=0, window=0, percentile=100.00%, depth=16 01:14:37.586 filename1: (groupid=0, jobs=1): err= 0: pid=83019: Mon Dec 9 06:13:30 2024 01:14:37.586 read: IOPS=300, BW=1203KiB/s (1232kB/s)(11.8MiB/10003msec) 01:14:37.586 slat (usec): min=2, max=8053, avg=39.93, stdev=376.97 01:14:37.586 clat (msec): min=8, max=228, avg=53.04, stdev=20.00 01:14:37.586 lat (msec): min=8, max=228, avg=53.08, stdev=20.00 01:14:37.586 clat percentiles (msec): 01:14:37.586 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 38], 01:14:37.586 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 57], 01:14:37.586 | 70.00th=[ 60], 80.00th=[ 61], 90.00th=[ 72], 95.00th=[ 85], 01:14:37.586 | 99.00th=[ 120], 99.50th=[ 159], 99.90th=[ 228], 99.95th=[ 228], 01:14:37.586 | 99.99th=[ 228] 01:14:37.586 bw ( KiB/s): min= 872, max= 1394, per=4.21%, avg=1189.42, stdev=129.86, samples=19 01:14:37.586 iops : min= 218, max= 348, avg=297.32, stdev=32.41, samples=19 01:14:37.586 lat (msec) : 10=0.23%, 20=0.27%, 50=49.63%, 100=48.70%, 250=1.16% 01:14:37.586 cpu : usr=34.73%, sys=1.52%, ctx=1017, majf=0, minf=9 01:14:37.586 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=81.1%, 16=16.0%, 32=0.0%, >=64=0.0% 01:14:37.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.586 complete : 0=0.0%, 4=87.8%, 8=11.7%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.586 issued rwts: total=3008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:37.586 latency : target=0, window=0, percentile=100.00%, depth=16 01:14:37.586 filename1: (groupid=0, jobs=1): err= 0: pid=83020: Mon Dec 9 06:13:30 2024 01:14:37.586 read: IOPS=299, BW=1200KiB/s (1229kB/s)(11.8MiB/10029msec) 01:14:37.586 slat 
(usec): min=5, max=8063, avg=46.76, stdev=420.35 01:14:37.586 clat (msec): min=21, max=238, avg=53.11, stdev=18.38 01:14:37.586 lat (msec): min=21, max=238, avg=53.15, stdev=18.38 01:14:37.586 clat percentiles (msec): 01:14:37.586 | 1.00th=[ 26], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 38], 01:14:37.586 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 57], 01:14:37.586 | 70.00th=[ 61], 80.00th=[ 61], 90.00th=[ 72], 95.00th=[ 85], 01:14:37.586 | 99.00th=[ 118], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 239], 01:14:37.586 | 99.99th=[ 239] 01:14:37.586 bw ( KiB/s): min= 888, max= 1539, per=4.24%, avg=1198.00, stdev=139.52, samples=20 01:14:37.586 iops : min= 222, max= 384, avg=299.45, stdev=34.78, samples=20 01:14:37.586 lat (msec) : 50=49.80%, 100=48.60%, 250=1.60% 01:14:37.586 cpu : usr=38.03%, sys=1.52%, ctx=1002, majf=0, minf=9 01:14:37.586 IO depths : 1=0.1%, 2=0.3%, 4=0.9%, 8=82.4%, 16=16.3%, 32=0.0%, >=64=0.0% 01:14:37.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.586 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.586 issued rwts: total=3008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:37.586 latency : target=0, window=0, percentile=100.00%, depth=16 01:14:37.586 filename1: (groupid=0, jobs=1): err= 0: pid=83021: Mon Dec 9 06:13:30 2024 01:14:37.586 read: IOPS=287, BW=1150KiB/s (1178kB/s)(11.3MiB/10035msec) 01:14:37.586 slat (usec): min=5, max=8098, avg=31.58, stdev=227.24 01:14:37.586 clat (msec): min=15, max=229, avg=55.47, stdev=19.08 01:14:37.586 lat (msec): min=15, max=229, avg=55.50, stdev=19.08 01:14:37.586 clat percentiles (msec): 01:14:37.586 | 1.00th=[ 20], 5.00th=[ 28], 10.00th=[ 36], 20.00th=[ 43], 01:14:37.586 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 55], 60.00th=[ 58], 01:14:37.586 | 70.00th=[ 61], 80.00th=[ 67], 90.00th=[ 78], 95.00th=[ 85], 01:14:37.586 | 99.00th=[ 125], 99.50th=[ 163], 99.90th=[ 165], 99.95th=[ 230], 01:14:37.586 | 99.99th=[ 230] 01:14:37.586 bw ( KiB/s): min= 808, max= 1795, per=4.07%, avg=1149.10, stdev=188.98, samples=20 01:14:37.586 iops : min= 202, max= 448, avg=287.20, stdev=47.09, samples=20 01:14:37.586 lat (msec) : 20=1.11%, 50=36.07%, 100=60.95%, 250=1.87% 01:14:37.586 cpu : usr=36.48%, sys=1.28%, ctx=1366, majf=0, minf=9 01:14:37.586 IO depths : 1=0.1%, 2=0.8%, 4=3.4%, 8=79.2%, 16=16.6%, 32=0.0%, >=64=0.0% 01:14:37.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.586 complete : 0=0.0%, 4=88.8%, 8=10.5%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.586 issued rwts: total=2886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:37.586 latency : target=0, window=0, percentile=100.00%, depth=16 01:14:37.586 filename2: (groupid=0, jobs=1): err= 0: pid=83022: Mon Dec 9 06:13:30 2024 01:14:37.586 read: IOPS=305, BW=1221KiB/s (1251kB/s)(12.0MiB/10058msec) 01:14:37.586 slat (usec): min=5, max=8023, avg=22.84, stdev=190.70 01:14:37.586 clat (msec): min=2, max=230, avg=52.21, stdev=22.18 01:14:37.586 lat (msec): min=2, max=230, avg=52.23, stdev=22.19 01:14:37.586 clat percentiles (msec): 01:14:37.586 | 1.00th=[ 3], 5.00th=[ 11], 10.00th=[ 24], 20.00th=[ 37], 01:14:37.586 | 30.00th=[ 48], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 59], 01:14:37.586 | 70.00th=[ 61], 80.00th=[ 62], 90.00th=[ 73], 95.00th=[ 85], 01:14:37.586 | 99.00th=[ 129], 99.50th=[ 136], 99.90th=[ 163], 99.95th=[ 230], 01:14:37.586 | 99.99th=[ 230] 01:14:37.586 bw ( KiB/s): min= 912, max= 3424, per=4.33%, avg=1222.00, stdev=528.82, samples=20 01:14:37.586 iops : min= 228, max= 856, 
avg=305.50, stdev=132.21, samples=20 01:14:37.586 lat (msec) : 4=4.17%, 10=0.65%, 20=4.10%, 50=36.89%, 100=52.65% 01:14:37.586 lat (msec) : 250=1.53% 01:14:37.586 cpu : usr=33.49%, sys=1.34%, ctx=910, majf=0, minf=0 01:14:37.586 IO depths : 1=0.3%, 2=0.9%, 4=2.3%, 8=79.8%, 16=16.7%, 32=0.0%, >=64=0.0% 01:14:37.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.586 complete : 0=0.0%, 4=88.6%, 8=10.9%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.586 issued rwts: total=3071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:37.586 latency : target=0, window=0, percentile=100.00%, depth=16 01:14:37.586 filename2: (groupid=0, jobs=1): err= 0: pid=83023: Mon Dec 9 06:13:30 2024 01:14:37.586 read: IOPS=288, BW=1153KiB/s (1181kB/s)(11.3MiB/10019msec) 01:14:37.586 slat (usec): min=2, max=6016, avg=22.81, stdev=134.77 01:14:37.586 clat (msec): min=16, max=230, avg=55.36, stdev=18.76 01:14:37.586 lat (msec): min=16, max=230, avg=55.38, stdev=18.76 01:14:37.586 clat percentiles (msec): 01:14:37.586 | 1.00th=[ 20], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 39], 01:14:37.586 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 56], 60.00th=[ 60], 01:14:37.586 | 70.00th=[ 61], 80.00th=[ 65], 90.00th=[ 73], 95.00th=[ 85], 01:14:37.586 | 99.00th=[ 125], 99.50th=[ 155], 99.90th=[ 155], 99.95th=[ 230], 01:14:37.586 | 99.99th=[ 230] 01:14:37.586 bw ( KiB/s): min= 848, max= 1536, per=4.08%, avg=1151.15, stdev=161.64, samples=20 01:14:37.586 iops : min= 212, max= 384, avg=287.75, stdev=40.36, samples=20 01:14:37.586 lat (msec) : 20=1.21%, 50=41.86%, 100=54.81%, 250=2.11% 01:14:37.586 cpu : usr=35.67%, sys=1.39%, ctx=966, majf=0, minf=9 01:14:37.586 IO depths : 1=0.1%, 2=1.7%, 4=6.6%, 8=76.3%, 16=15.4%, 32=0.0%, >=64=0.0% 01:14:37.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.586 complete : 0=0.0%, 4=89.1%, 8=9.5%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.586 issued rwts: total=2888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:37.587 latency : target=0, window=0, percentile=100.00%, depth=16 01:14:37.587 filename2: (groupid=0, jobs=1): err= 0: pid=83024: Mon Dec 9 06:13:30 2024 01:14:37.587 read: IOPS=285, BW=1143KiB/s (1171kB/s)(11.2MiB/10036msec) 01:14:37.587 slat (usec): min=5, max=8049, avg=46.67, stdev=436.26 01:14:37.587 clat (msec): min=5, max=227, avg=55.74, stdev=21.16 01:14:37.587 lat (msec): min=5, max=227, avg=55.78, stdev=21.17 01:14:37.587 clat percentiles (msec): 01:14:37.587 | 1.00th=[ 7], 5.00th=[ 24], 10.00th=[ 36], 20.00th=[ 45], 01:14:37.587 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 56], 60.00th=[ 58], 01:14:37.587 | 70.00th=[ 61], 80.00th=[ 67], 90.00th=[ 79], 95.00th=[ 84], 01:14:37.587 | 99.00th=[ 120], 99.50th=[ 163], 99.90th=[ 228], 99.95th=[ 228], 01:14:37.587 | 99.99th=[ 228] 01:14:37.587 bw ( KiB/s): min= 768, max= 2048, per=4.04%, avg=1141.20, stdev=241.60, samples=20 01:14:37.587 iops : min= 192, max= 512, avg=285.30, stdev=60.40, samples=20 01:14:37.587 lat (msec) : 10=1.12%, 20=2.23%, 50=34.75%, 100=60.61%, 250=1.29% 01:14:37.587 cpu : usr=36.65%, sys=1.27%, ctx=1233, majf=0, minf=9 01:14:37.587 IO depths : 1=0.1%, 2=1.4%, 4=5.1%, 8=77.1%, 16=16.3%, 32=0.0%, >=64=0.0% 01:14:37.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.587 complete : 0=0.0%, 4=89.3%, 8=9.6%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.587 issued rwts: total=2869,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:37.587 latency : target=0, window=0, percentile=100.00%, depth=16 01:14:37.587 filename2: (groupid=0, jobs=1): err= 
0: pid=83025: Mon Dec 9 06:13:30 2024 01:14:37.587 read: IOPS=285, BW=1142KiB/s (1170kB/s)(11.2MiB/10018msec) 01:14:37.587 slat (usec): min=4, max=10134, avg=36.30, stdev=315.15 01:14:37.587 clat (msec): min=19, max=261, avg=55.83, stdev=19.21 01:14:37.587 lat (msec): min=19, max=261, avg=55.86, stdev=19.21 01:14:37.587 clat percentiles (msec): 01:14:37.587 | 1.00th=[ 22], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 41], 01:14:37.587 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 55], 60.00th=[ 57], 01:14:37.587 | 70.00th=[ 61], 80.00th=[ 68], 90.00th=[ 77], 95.00th=[ 87], 01:14:37.587 | 99.00th=[ 121], 99.50th=[ 169], 99.90th=[ 169], 99.95th=[ 262], 01:14:37.587 | 99.99th=[ 262] 01:14:37.587 bw ( KiB/s): min= 766, max= 1510, per=4.04%, avg=1139.80, stdev=175.63, samples=20 01:14:37.587 iops : min= 191, max= 377, avg=284.90, stdev=43.91, samples=20 01:14:37.587 lat (msec) : 20=0.49%, 50=41.87%, 100=55.92%, 250=1.64%, 500=0.07% 01:14:37.587 cpu : usr=39.72%, sys=1.29%, ctx=1225, majf=0, minf=9 01:14:37.587 IO depths : 1=0.1%, 2=1.4%, 4=5.6%, 8=77.4%, 16=15.4%, 32=0.0%, >=64=0.0% 01:14:37.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.587 complete : 0=0.0%, 4=88.8%, 8=10.0%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.587 issued rwts: total=2861,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:37.587 latency : target=0, window=0, percentile=100.00%, depth=16 01:14:37.587 filename2: (groupid=0, jobs=1): err= 0: pid=83026: Mon Dec 9 06:13:30 2024 01:14:37.587 read: IOPS=290, BW=1161KiB/s (1188kB/s)(11.4MiB/10036msec) 01:14:37.587 slat (usec): min=5, max=8055, avg=36.63, stdev=317.79 01:14:37.587 clat (msec): min=4, max=233, avg=54.95, stdev=19.60 01:14:37.587 lat (msec): min=4, max=233, avg=54.98, stdev=19.61 01:14:37.587 clat percentiles (msec): 01:14:37.587 | 1.00th=[ 11], 5.00th=[ 24], 10.00th=[ 36], 20.00th=[ 42], 01:14:37.587 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 56], 60.00th=[ 59], 01:14:37.587 | 70.00th=[ 61], 80.00th=[ 65], 90.00th=[ 75], 95.00th=[ 85], 01:14:37.587 | 99.00th=[ 132], 99.50th=[ 161], 99.90th=[ 163], 99.95th=[ 234], 01:14:37.587 | 99.99th=[ 234] 01:14:37.587 bw ( KiB/s): min= 784, max= 2084, per=4.10%, avg=1158.60, stdev=255.52, samples=20 01:14:37.587 iops : min= 196, max= 521, avg=289.65, stdev=63.88, samples=20 01:14:37.587 lat (msec) : 10=0.69%, 20=2.75%, 50=37.29%, 100=57.55%, 250=1.72% 01:14:37.587 cpu : usr=38.41%, sys=1.24%, ctx=1098, majf=0, minf=9 01:14:37.587 IO depths : 1=0.1%, 2=0.7%, 4=3.1%, 8=79.4%, 16=16.8%, 32=0.0%, >=64=0.0% 01:14:37.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.587 complete : 0=0.0%, 4=88.7%, 8=10.6%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.587 issued rwts: total=2912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:37.587 latency : target=0, window=0, percentile=100.00%, depth=16 01:14:37.587 filename2: (groupid=0, jobs=1): err= 0: pid=83027: Mon Dec 9 06:13:30 2024 01:14:37.587 read: IOPS=302, BW=1210KiB/s (1239kB/s)(11.9MiB/10030msec) 01:14:37.587 slat (usec): min=4, max=9035, avg=27.23, stdev=231.24 01:14:37.587 clat (msec): min=9, max=227, avg=52.74, stdev=19.01 01:14:37.587 lat (msec): min=9, max=227, avg=52.77, stdev=19.01 01:14:37.587 clat percentiles (msec): 01:14:37.587 | 1.00th=[ 13], 5.00th=[ 26], 10.00th=[ 35], 20.00th=[ 38], 01:14:37.587 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 57], 01:14:37.587 | 70.00th=[ 59], 80.00th=[ 62], 90.00th=[ 72], 95.00th=[ 84], 01:14:37.587 | 99.00th=[ 130], 99.50th=[ 142], 99.90th=[ 169], 99.95th=[ 228], 
01:14:37.587 | 99.99th=[ 228] 01:14:37.587 bw ( KiB/s): min= 904, max= 1936, per=4.28%, avg=1208.80, stdev=211.91, samples=20 01:14:37.587 iops : min= 226, max= 484, avg=302.20, stdev=52.98, samples=20 01:14:37.587 lat (msec) : 10=0.46%, 20=1.25%, 50=46.34%, 100=50.36%, 250=1.58% 01:14:37.587 cpu : usr=36.74%, sys=1.31%, ctx=1135, majf=0, minf=9 01:14:37.587 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=81.6%, 16=16.3%, 32=0.0%, >=64=0.0% 01:14:37.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.587 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.587 issued rwts: total=3034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:37.587 latency : target=0, window=0, percentile=100.00%, depth=16 01:14:37.587 filename2: (groupid=0, jobs=1): err= 0: pid=83028: Mon Dec 9 06:13:30 2024 01:14:37.587 read: IOPS=294, BW=1177KiB/s (1205kB/s)(11.5MiB/10020msec) 01:14:37.587 slat (usec): min=3, max=8045, avg=33.11, stdev=252.95 01:14:37.587 clat (msec): min=24, max=234, avg=54.18, stdev=17.96 01:14:37.587 lat (msec): min=24, max=234, avg=54.22, stdev=17.96 01:14:37.587 clat percentiles (msec): 01:14:37.587 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 40], 01:14:37.587 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 54], 60.00th=[ 56], 01:14:37.587 | 70.00th=[ 59], 80.00th=[ 64], 90.00th=[ 73], 95.00th=[ 85], 01:14:37.587 | 99.00th=[ 124], 99.50th=[ 161], 99.90th=[ 161], 99.95th=[ 234], 01:14:37.587 | 99.99th=[ 236] 01:14:37.587 bw ( KiB/s): min= 768, max= 1328, per=4.16%, avg=1175.30, stdev=122.26, samples=20 01:14:37.587 iops : min= 192, max= 332, avg=293.80, stdev=30.55, samples=20 01:14:37.587 lat (msec) : 50=44.12%, 100=54.26%, 250=1.63% 01:14:37.587 cpu : usr=42.51%, sys=1.73%, ctx=1540, majf=0, minf=9 01:14:37.587 IO depths : 1=0.1%, 2=0.7%, 4=2.5%, 8=80.6%, 16=15.9%, 32=0.0%, >=64=0.0% 01:14:37.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.587 complete : 0=0.0%, 4=87.9%, 8=11.5%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.587 issued rwts: total=2949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:37.587 latency : target=0, window=0, percentile=100.00%, depth=16 01:14:37.587 filename2: (groupid=0, jobs=1): err= 0: pid=83029: Mon Dec 9 06:13:30 2024 01:14:37.587 read: IOPS=300, BW=1200KiB/s (1229kB/s)(11.7MiB/10007msec) 01:14:37.587 slat (usec): min=2, max=8050, avg=46.56, stdev=467.83 01:14:37.587 clat (msec): min=8, max=169, avg=53.12, stdev=18.47 01:14:37.587 lat (msec): min=8, max=169, avg=53.17, stdev=18.48 01:14:37.587 clat percentiles (msec): 01:14:37.587 | 1.00th=[ 22], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 37], 01:14:37.587 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 57], 01:14:37.587 | 70.00th=[ 61], 80.00th=[ 61], 90.00th=[ 72], 95.00th=[ 85], 01:14:37.587 | 99.00th=[ 121], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 169], 01:14:37.587 | 99.99th=[ 169] 01:14:37.587 bw ( KiB/s): min= 960, max= 1410, per=4.23%, avg=1194.90, stdev=114.62, samples=20 01:14:37.587 iops : min= 240, max= 352, avg=298.70, stdev=28.60, samples=20 01:14:37.587 lat (msec) : 10=0.23%, 20=0.20%, 50=49.95%, 100=47.89%, 250=1.73% 01:14:37.587 cpu : usr=34.17%, sys=1.34%, ctx=949, majf=0, minf=9 01:14:37.587 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=81.8%, 16=16.3%, 32=0.0%, >=64=0.0% 01:14:37.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.587 complete : 0=0.0%, 4=87.7%, 8=12.0%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:37.587 issued rwts: total=3003,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 01:14:37.587 latency : target=0, window=0, percentile=100.00%, depth=16 01:14:37.587 01:14:37.587 Run status group 0 (all jobs): 01:14:37.587 READ: bw=27.6MiB/s (28.9MB/s), 1136KiB/s-1272KiB/s (1163kB/s-1303kB/s), io=277MiB (291MB), run=10002-10058msec 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 01:14:37.587 
06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:37.587 bdev_null0 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:37.587 [2024-12-09 06:13:30.513133] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 
01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:37.587 bdev_null1 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:37.587 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:14:37.588 { 01:14:37.588 "params": { 01:14:37.588 "name": "Nvme$subsystem", 01:14:37.588 "trtype": "$TEST_TRANSPORT", 01:14:37.588 "traddr": "$NVMF_FIRST_TARGET_IP", 01:14:37.588 "adrfam": "ipv4", 01:14:37.588 "trsvcid": "$NVMF_PORT", 01:14:37.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:14:37.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:14:37.588 "hdgst": ${hdgst:-false}, 01:14:37.588 "ddgst": ${ddgst:-false} 01:14:37.588 }, 01:14:37.588 "method": "bdev_nvme_attach_controller" 01:14:37.588 } 01:14:37.588 EOF 01:14:37.588 )") 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:14:37.588 06:13:30 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:14:37.588 { 01:14:37.588 "params": { 01:14:37.588 "name": "Nvme$subsystem", 01:14:37.588 "trtype": "$TEST_TRANSPORT", 01:14:37.588 "traddr": "$NVMF_FIRST_TARGET_IP", 01:14:37.588 "adrfam": "ipv4", 01:14:37.588 "trsvcid": "$NVMF_PORT", 01:14:37.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:14:37.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:14:37.588 "hdgst": ${hdgst:-false}, 01:14:37.588 "ddgst": ${ddgst:-false} 01:14:37.588 }, 01:14:37.588 "method": "bdev_nvme_attach_controller" 01:14:37.588 } 01:14:37.588 EOF 01:14:37.588 )") 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:14:37.588 "params": { 01:14:37.588 "name": "Nvme0", 01:14:37.588 "trtype": "tcp", 01:14:37.588 "traddr": "10.0.0.3", 01:14:37.588 "adrfam": "ipv4", 01:14:37.588 "trsvcid": "4420", 01:14:37.588 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:14:37.588 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:14:37.588 "hdgst": false, 01:14:37.588 "ddgst": false 01:14:37.588 }, 01:14:37.588 "method": "bdev_nvme_attach_controller" 01:14:37.588 },{ 01:14:37.588 "params": { 01:14:37.588 "name": "Nvme1", 01:14:37.588 "trtype": "tcp", 01:14:37.588 "traddr": "10.0.0.3", 01:14:37.588 "adrfam": "ipv4", 01:14:37.588 "trsvcid": "4420", 01:14:37.588 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:14:37.588 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:14:37.588 "hdgst": false, 01:14:37.588 "ddgst": false 01:14:37.588 }, 01:14:37.588 "method": "bdev_nvme_attach_controller" 01:14:37.588 }' 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:14:37.588 06:13:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:14:37.588 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 01:14:37.588 ... 01:14:37.588 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 01:14:37.588 ... 
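A minimal standalone sketch of the fio invocation the xtrace above performs, assuming the JSON attach config is written to a regular file instead of being piped through /dev/fd/62. Only the plugin path, the --ioengine=spdk_bdev / --spdk_json_conf mechanism and the bdev_nvme_attach_controller parameters (TCP target 10.0.0.3:4420, cnode0/host0, digests off) are taken from the log itself; the "subsystems" wrapper, the bdev name Nvme0n1 and the remaining fio job options are illustrative assumptions, not the harness's exact job file.

# Sketch only: config wrapper shape, bdev name and job options are assumptions.
cat > /tmp/bdev_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same LD_PRELOAD of the SPDK fio bdev plugin as in the log above.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --name=filename0 --ioengine=spdk_bdev \
  --spdk_json_conf /tmp/bdev_nvme.json --filename=Nvme0n1 \
  --thread=1 --rw=randread --bs=8k --iodepth=8 --runtime=5 --time_based=1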
01:14:37.588 fio-3.35 01:14:37.588 Starting 4 threads 01:14:42.858 01:14:42.858 filename0: (groupid=0, jobs=1): err= 0: pid=83171: Mon Dec 9 06:13:36 2024 01:14:42.858 read: IOPS=2450, BW=19.1MiB/s (20.1MB/s)(95.7MiB/5001msec) 01:14:42.858 slat (usec): min=4, max=157, avg=18.89, stdev=11.97 01:14:42.858 clat (usec): min=632, max=7601, avg=3198.03, stdev=722.45 01:14:42.858 lat (usec): min=643, max=7608, avg=3216.93, stdev=722.43 01:14:42.858 clat percentiles (usec): 01:14:42.858 | 1.00th=[ 1336], 5.00th=[ 1795], 10.00th=[ 2057], 20.00th=[ 2671], 01:14:42.858 | 30.00th=[ 3097], 40.00th=[ 3195], 50.00th=[ 3261], 60.00th=[ 3425], 01:14:42.858 | 70.00th=[ 3556], 80.00th=[ 3752], 90.00th=[ 4015], 95.00th=[ 4228], 01:14:42.858 | 99.00th=[ 4686], 99.50th=[ 4817], 99.90th=[ 5800], 99.95th=[ 6325], 01:14:42.858 | 99.99th=[ 6390] 01:14:42.858 bw ( KiB/s): min=17392, max=20848, per=23.09%, avg=19696.00, stdev=1053.51, samples=9 01:14:42.858 iops : min= 2174, max= 2606, avg=2462.00, stdev=131.69, samples=9 01:14:42.858 lat (usec) : 750=0.01%, 1000=0.09% 01:14:42.858 lat (msec) : 2=8.98%, 4=80.65%, 10=10.27% 01:14:42.858 cpu : usr=94.32%, sys=4.48%, ctx=103, majf=0, minf=9 01:14:42.858 IO depths : 1=1.9%, 2=15.3%, 4=55.5%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 01:14:42.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:42.858 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:42.858 issued rwts: total=12255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:42.858 latency : target=0, window=0, percentile=100.00%, depth=8 01:14:42.858 filename0: (groupid=0, jobs=1): err= 0: pid=83172: Mon Dec 9 06:13:36 2024 01:14:42.858 read: IOPS=2480, BW=19.4MiB/s (20.3MB/s)(96.9MiB/5001msec) 01:14:42.858 slat (nsec): min=5399, max=79092, avg=20267.21, stdev=12443.79 01:14:42.858 clat (usec): min=598, max=5688, avg=3153.88, stdev=677.20 01:14:42.858 lat (usec): min=608, max=5696, avg=3174.15, stdev=676.71 01:14:42.858 clat percentiles (usec): 01:14:42.858 | 1.00th=[ 1532], 5.00th=[ 1811], 10.00th=[ 2040], 20.00th=[ 2606], 01:14:42.858 | 30.00th=[ 3032], 40.00th=[ 3163], 50.00th=[ 3228], 60.00th=[ 3392], 01:14:42.858 | 70.00th=[ 3556], 80.00th=[ 3687], 90.00th=[ 3851], 95.00th=[ 4080], 01:14:42.858 | 99.00th=[ 4424], 99.50th=[ 4752], 99.90th=[ 5407], 99.95th=[ 5538], 01:14:42.858 | 99.99th=[ 5669] 01:14:42.858 bw ( KiB/s): min=19024, max=21088, per=23.20%, avg=19789.33, stdev=651.35, samples=9 01:14:42.858 iops : min= 2378, max= 2636, avg=2473.67, stdev=81.42, samples=9 01:14:42.858 lat (usec) : 750=0.02%, 1000=0.05% 01:14:42.858 lat (msec) : 2=8.91%, 4=84.36%, 10=6.66% 01:14:42.858 cpu : usr=95.14%, sys=4.16%, ctx=7, majf=0, minf=10 01:14:42.858 IO depths : 1=1.8%, 2=14.7%, 4=55.9%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 01:14:42.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:42.858 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:42.858 issued rwts: total=12405,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:42.858 latency : target=0, window=0, percentile=100.00%, depth=8 01:14:42.858 filename1: (groupid=0, jobs=1): err= 0: pid=83173: Mon Dec 9 06:13:36 2024 01:14:42.858 read: IOPS=3059, BW=23.9MiB/s (25.1MB/s)(120MiB/5002msec) 01:14:42.858 slat (nsec): min=5655, max=67680, avg=13830.70, stdev=9258.99 01:14:42.858 clat (usec): min=490, max=5307, avg=2580.00, stdev=740.24 01:14:42.858 lat (usec): min=502, max=5348, avg=2593.83, stdev=741.61 01:14:42.858 clat percentiles (usec): 01:14:42.858 | 1.00th=[ 1012], 
5.00th=[ 1418], 10.00th=[ 1614], 20.00th=[ 1729], 01:14:42.858 | 30.00th=[ 2073], 40.00th=[ 2442], 50.00th=[ 2737], 60.00th=[ 2868], 01:14:42.858 | 70.00th=[ 3097], 80.00th=[ 3261], 90.00th=[ 3490], 95.00th=[ 3687], 01:14:42.858 | 99.00th=[ 3851], 99.50th=[ 4015], 99.90th=[ 4293], 99.95th=[ 4490], 01:14:42.858 | 99.99th=[ 4752] 01:14:42.858 bw ( KiB/s): min=22720, max=26480, per=28.76%, avg=24531.56, stdev=1247.58, samples=9 01:14:42.858 iops : min= 2840, max= 3310, avg=3066.44, stdev=155.95, samples=9 01:14:42.858 lat (usec) : 500=0.01%, 750=0.39%, 1000=0.57% 01:14:42.858 lat (msec) : 2=27.02%, 4=71.50%, 10=0.51% 01:14:42.858 cpu : usr=93.72%, sys=5.56%, ctx=7, majf=0, minf=0 01:14:42.858 IO depths : 1=0.4%, 2=2.1%, 4=62.7%, 8=34.9%, 16=0.0%, 32=0.0%, >=64=0.0% 01:14:42.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:42.858 complete : 0=0.0%, 4=99.2%, 8=0.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:42.858 issued rwts: total=15306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:42.858 latency : target=0, window=0, percentile=100.00%, depth=8 01:14:42.858 filename1: (groupid=0, jobs=1): err= 0: pid=83174: Mon Dec 9 06:13:36 2024 01:14:42.858 read: IOPS=2670, BW=20.9MiB/s (21.9MB/s)(104MiB/5002msec) 01:14:42.858 slat (nsec): min=5670, max=67680, avg=13344.40, stdev=9409.56 01:14:42.858 clat (usec): min=339, max=5505, avg=2955.59, stdev=721.05 01:14:42.858 lat (usec): min=348, max=5518, avg=2968.93, stdev=721.59 01:14:42.858 clat percentiles (usec): 01:14:42.858 | 1.00th=[ 1221], 5.00th=[ 1582], 10.00th=[ 1762], 20.00th=[ 2278], 01:14:42.858 | 30.00th=[ 2769], 40.00th=[ 2933], 50.00th=[ 3130], 60.00th=[ 3261], 01:14:42.858 | 70.00th=[ 3359], 80.00th=[ 3556], 90.00th=[ 3752], 95.00th=[ 3851], 01:14:42.858 | 99.00th=[ 4228], 99.50th=[ 4424], 99.90th=[ 4686], 99.95th=[ 4752], 01:14:42.858 | 99.99th=[ 5342] 01:14:42.858 bw ( KiB/s): min=19520, max=24304, per=24.97%, avg=21296.00, stdev=1521.58, samples=9 01:14:42.858 iops : min= 2440, max= 3038, avg=2662.00, stdev=190.20, samples=9 01:14:42.858 lat (usec) : 500=0.03%, 750=0.02%, 1000=0.34% 01:14:42.859 lat (msec) : 2=14.39%, 4=82.52%, 10=2.70% 01:14:42.859 cpu : usr=93.54%, sys=5.78%, ctx=6, majf=0, minf=0 01:14:42.859 IO depths : 1=0.8%, 2=11.2%, 4=57.6%, 8=30.5%, 16=0.0%, 32=0.0%, >=64=0.0% 01:14:42.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:42.859 complete : 0=0.0%, 4=95.7%, 8=4.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:42.859 issued rwts: total=13358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:42.859 latency : target=0, window=0, percentile=100.00%, depth=8 01:14:42.859 01:14:42.859 Run status group 0 (all jobs): 01:14:42.859 READ: bw=83.3MiB/s (87.3MB/s), 19.1MiB/s-23.9MiB/s (20.1MB/s-25.1MB/s), io=417MiB (437MB), run=5001-5002msec 01:14:42.859 06:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 01:14:42.859 06:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:14:42.859 06:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:14:42.859 06:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:14:42.859 06:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:14:42.859 06:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:14:42.859 06:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:42.859 06:13:36 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@10 -- # set +x 01:14:42.859 06:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:42.859 06:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:14:42.859 06:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:42.859 06:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:42.859 06:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:42.859 06:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:14:42.859 06:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 01:14:42.859 06:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 01:14:42.859 06:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:14:42.859 06:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:42.859 06:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:42.859 06:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:42.859 06:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:14:42.859 06:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:42.859 06:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:42.859 ************************************ 01:14:42.859 END TEST fio_dif_rand_params 01:14:42.859 ************************************ 01:14:42.859 06:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:42.859 01:14:42.859 real 0m23.726s 01:14:42.859 user 2m6.184s 01:14:42.859 sys 0m6.429s 01:14:42.859 06:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:42.859 06:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:42.859 06:13:36 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 01:14:42.859 06:13:36 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:42.859 06:13:36 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:42.859 06:13:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:14:42.859 ************************************ 01:14:42.859 START TEST fio_dif_digest 01:14:42.859 ************************************ 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:14:42.859 bdev_null0 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:14:42.859 [2024-12-09 06:13:36.767043] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:14:42.859 { 01:14:42.859 "params": { 01:14:42.859 "name": "Nvme$subsystem", 01:14:42.859 "trtype": "$TEST_TRANSPORT", 01:14:42.859 "traddr": "$NVMF_FIRST_TARGET_IP", 01:14:42.859 "adrfam": "ipv4", 01:14:42.859 "trsvcid": "$NVMF_PORT", 01:14:42.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:14:42.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:14:42.859 "hdgst": ${hdgst:-false}, 01:14:42.859 "ddgst": ${ddgst:-false} 01:14:42.859 }, 01:14:42.859 "method": "bdev_nvme_attach_controller" 
01:14:42.859 } 01:14:42.859 EOF 01:14:42.859 )") 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:14:42.859 "params": { 01:14:42.859 "name": "Nvme0", 01:14:42.859 "trtype": "tcp", 01:14:42.859 "traddr": "10.0.0.3", 01:14:42.859 "adrfam": "ipv4", 01:14:42.859 "trsvcid": "4420", 01:14:42.859 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:14:42.859 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:14:42.859 "hdgst": true, 01:14:42.859 "ddgst": true 01:14:42.859 }, 01:14:42.859 "method": "bdev_nvme_attach_controller" 01:14:42.859 }' 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:14:42.859 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:14:42.860 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:14:42.860 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:14:42.860 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 01:14:42.860 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:14:42.860 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:14:42.860 06:13:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:14:42.860 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 01:14:42.860 ... 
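For reference, the target-side setup recorded in the xtrace above reduces to the RPC sequence below; the digest test differs from the earlier runs only in using a DIF type 3 null bdev and enabling "hdgst"/"ddgst" on the host side, as printed in the JSON just above. Command names, sizes, NQN and listener address are copied from the log; the scripts/rpc.py entry point standing in for rpc_cmd is an assumption.

# Sketch of the target-side RPCs the digest test relies on (rpc.py path assumed).
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
  --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
  -t tcp -a 10.0.0.3 -s 4420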
01:14:42.860 fio-3.35 01:14:42.860 Starting 3 threads 01:14:55.069 01:14:55.069 filename0: (groupid=0, jobs=1): err= 0: pid=83285: Mon Dec 9 06:13:47 2024 01:14:55.069 read: IOPS=278, BW=34.8MiB/s (36.5MB/s)(348MiB/10003msec) 01:14:55.069 slat (nsec): min=5848, max=42190, avg=13422.30, stdev=7601.83 01:14:55.069 clat (usec): min=4173, max=38153, avg=10744.74, stdev=1984.37 01:14:55.069 lat (usec): min=4183, max=38187, avg=10758.16, stdev=1984.91 01:14:55.069 clat percentiles (usec): 01:14:55.069 | 1.00th=[10421], 5.00th=[10421], 10.00th=[10552], 20.00th=[10552], 01:14:55.069 | 30.00th=[10552], 40.00th=[10552], 50.00th=[10552], 60.00th=[10552], 01:14:55.069 | 70.00th=[10683], 80.00th=[10683], 90.00th=[10683], 95.00th=[10683], 01:14:55.069 | 99.00th=[10945], 99.50th=[31065], 99.90th=[38011], 99.95th=[38011], 01:14:55.069 | 99.99th=[38011] 01:14:55.069 bw ( KiB/s): min=26112, max=36864, per=33.33%, avg=35610.95, stdev=2306.99, samples=19 01:14:55.069 iops : min= 204, max= 288, avg=278.21, stdev=18.02, samples=19 01:14:55.069 lat (msec) : 10=0.22%, 20=99.14%, 50=0.65% 01:14:55.069 cpu : usr=88.52%, sys=10.99%, ctx=129, majf=0, minf=0 01:14:55.069 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:14:55.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:55.069 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:55.069 issued rwts: total=2784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:55.069 latency : target=0, window=0, percentile=100.00%, depth=3 01:14:55.069 filename0: (groupid=0, jobs=1): err= 0: pid=83286: Mon Dec 9 06:13:47 2024 01:14:55.069 read: IOPS=278, BW=34.8MiB/s (36.5MB/s)(348MiB/10003msec) 01:14:55.069 slat (nsec): min=5622, max=56010, avg=12524.66, stdev=7642.51 01:14:55.069 clat (usec): min=3861, max=38289, avg=10746.89, stdev=2039.61 01:14:55.069 lat (usec): min=3868, max=38312, avg=10759.41, stdev=2039.53 01:14:55.069 clat percentiles (usec): 01:14:55.069 | 1.00th=[10421], 5.00th=[10421], 10.00th=[10552], 20.00th=[10552], 01:14:55.069 | 30.00th=[10552], 40.00th=[10552], 50.00th=[10552], 60.00th=[10552], 01:14:55.069 | 70.00th=[10683], 80.00th=[10683], 90.00th=[10683], 95.00th=[10683], 01:14:55.069 | 99.00th=[10945], 99.50th=[37487], 99.90th=[38536], 99.95th=[38536], 01:14:55.069 | 99.99th=[38536] 01:14:55.069 bw ( KiB/s): min=25344, max=36864, per=33.33%, avg=35610.95, stdev=2497.94, samples=19 01:14:55.069 iops : min= 198, max= 288, avg=278.21, stdev=19.52, samples=19 01:14:55.069 lat (msec) : 4=0.11%, 10=0.11%, 20=99.14%, 50=0.65% 01:14:55.069 cpu : usr=94.84%, sys=4.63%, ctx=179, majf=0, minf=0 01:14:55.069 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:14:55.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:55.069 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:55.069 issued rwts: total=2784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:55.069 latency : target=0, window=0, percentile=100.00%, depth=3 01:14:55.069 filename0: (groupid=0, jobs=1): err= 0: pid=83287: Mon Dec 9 06:13:47 2024 01:14:55.069 read: IOPS=278, BW=34.8MiB/s (36.4MB/s)(348MiB/10001msec) 01:14:55.069 slat (nsec): min=5577, max=45609, avg=9344.73, stdev=3231.50 01:14:55.069 clat (usec): min=7784, max=38136, avg=10765.50, stdev=2004.02 01:14:55.069 lat (usec): min=7792, max=38153, avg=10774.84, stdev=2004.13 01:14:55.069 clat percentiles (usec): 01:14:55.069 | 1.00th=[10421], 5.00th=[10421], 10.00th=[10552], 20.00th=[10552], 
01:14:55.069 | 30.00th=[10552], 40.00th=[10552], 50.00th=[10552], 60.00th=[10683], 01:14:55.069 | 70.00th=[10683], 80.00th=[10683], 90.00th=[10683], 95.00th=[10683], 01:14:55.069 | 99.00th=[10945], 99.50th=[35914], 99.90th=[38011], 99.95th=[38011], 01:14:55.069 | 99.99th=[38011] 01:14:55.069 bw ( KiB/s): min=26112, max=36864, per=33.29%, avg=35570.53, stdev=2304.75, samples=19 01:14:55.069 iops : min= 204, max= 288, avg=277.89, stdev=18.01, samples=19 01:14:55.069 lat (msec) : 10=0.11%, 20=99.35%, 50=0.54% 01:14:55.069 cpu : usr=94.51%, sys=4.98%, ctx=25, majf=0, minf=0 01:14:55.069 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:14:55.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:55.069 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:55.069 issued rwts: total=2781,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:55.069 latency : target=0, window=0, percentile=100.00%, depth=3 01:14:55.069 01:14:55.069 Run status group 0 (all jobs): 01:14:55.069 READ: bw=104MiB/s (109MB/s), 34.8MiB/s-34.8MiB/s (36.4MB/s-36.5MB/s), io=1044MiB (1094MB), run=10001-10003msec 01:14:55.070 06:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 01:14:55.070 06:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 01:14:55.070 06:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 01:14:55.070 06:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 01:14:55.070 06:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 01:14:55.070 06:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:14:55.070 06:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:55.070 06:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:14:55.070 06:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:55.070 06:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:14:55.070 06:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:55.070 06:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:14:55.070 ************************************ 01:14:55.070 END TEST fio_dif_digest 01:14:55.070 ************************************ 01:14:55.070 06:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:55.070 01:14:55.070 real 0m11.193s 01:14:55.070 user 0m28.566s 01:14:55.070 sys 0m2.445s 01:14:55.070 06:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:55.070 06:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:14:55.070 06:13:47 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 01:14:55.070 06:13:47 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 01:14:55.070 06:13:47 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 01:14:55.070 06:13:47 nvmf_dif -- nvmf/common.sh@121 -- # sync 01:14:55.070 06:13:48 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:14:55.070 06:13:48 nvmf_dif -- nvmf/common.sh@124 -- # set +e 01:14:55.070 06:13:48 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 01:14:55.070 06:13:48 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:14:55.070 rmmod nvme_tcp 01:14:55.070 rmmod nvme_fabrics 01:14:55.070 rmmod nvme_keyring 01:14:55.070 06:13:48 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:14:55.070 06:13:48 nvmf_dif -- nvmf/common.sh@128 -- # set -e 01:14:55.070 06:13:48 nvmf_dif -- nvmf/common.sh@129 -- # return 0 01:14:55.070 06:13:48 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 82515 ']' 01:14:55.070 06:13:48 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 82515 01:14:55.070 06:13:48 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 82515 ']' 01:14:55.070 06:13:48 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 82515 01:14:55.070 06:13:48 nvmf_dif -- common/autotest_common.sh@959 -- # uname 01:14:55.070 06:13:48 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:14:55.070 06:13:48 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82515 01:14:55.070 killing process with pid 82515 01:14:55.070 06:13:48 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:14:55.070 06:13:48 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:14:55.070 06:13:48 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82515' 01:14:55.070 06:13:48 nvmf_dif -- common/autotest_common.sh@973 -- # kill 82515 01:14:55.070 06:13:48 nvmf_dif -- common/autotest_common.sh@978 -- # wait 82515 01:14:55.070 06:13:48 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 01:14:55.070 06:13:48 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:14:55.070 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:14:55.070 Waiting for block devices as requested 01:14:55.070 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:14:55.070 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:14:55.070 06:13:49 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:14:55.070 06:13:49 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:14:55.070 06:13:49 nvmf_dif -- nvmf/common.sh@297 -- # iptr 01:14:55.070 06:13:49 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 01:14:55.070 06:13:49 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:14:55.070 06:13:49 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 01:14:55.070 06:13:49 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:14:55.070 06:13:49 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:14:55.070 06:13:49 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:14:55.070 06:13:49 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:14:55.070 06:13:49 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:14:55.070 06:13:49 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:14:55.070 06:13:49 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:14:55.070 06:13:49 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:14:55.070 06:13:49 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:14:55.070 06:13:49 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:14:55.070 06:13:49 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:14:55.070 06:13:49 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:14:55.070 06:13:49 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:14:55.070 06:13:49 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:14:55.070 06:13:49 nvmf_dif -- nvmf/common.sh@245 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:14:55.070 06:13:49 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 01:14:55.070 06:13:49 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:14:55.070 06:13:49 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:14:55.070 06:13:49 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:14:55.330 06:13:49 nvmf_dif -- nvmf/common.sh@300 -- # return 0 01:14:55.330 01:14:55.330 real 1m1.721s 01:14:55.330 user 3m50.394s 01:14:55.330 sys 0m20.574s 01:14:55.330 06:13:49 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:55.330 06:13:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:14:55.330 ************************************ 01:14:55.330 END TEST nvmf_dif 01:14:55.330 ************************************ 01:14:55.330 06:13:49 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 01:14:55.330 06:13:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:55.330 06:13:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:55.330 06:13:49 -- common/autotest_common.sh@10 -- # set +x 01:14:55.330 ************************************ 01:14:55.330 START TEST nvmf_abort_qd_sizes 01:14:55.330 ************************************ 01:14:55.330 06:13:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 01:14:55.330 * Looking for test storage... 01:14:55.330 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:14:55.330 06:13:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:14:55.330 06:13:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 01:14:55.330 06:13:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:14:55.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:55.590 --rc genhtml_branch_coverage=1 01:14:55.590 --rc genhtml_function_coverage=1 01:14:55.590 --rc genhtml_legend=1 01:14:55.590 --rc geninfo_all_blocks=1 01:14:55.590 --rc geninfo_unexecuted_blocks=1 01:14:55.590 01:14:55.590 ' 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:14:55.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:55.590 --rc genhtml_branch_coverage=1 01:14:55.590 --rc genhtml_function_coverage=1 01:14:55.590 --rc genhtml_legend=1 01:14:55.590 --rc geninfo_all_blocks=1 01:14:55.590 --rc geninfo_unexecuted_blocks=1 01:14:55.590 01:14:55.590 ' 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:14:55.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:55.590 --rc genhtml_branch_coverage=1 01:14:55.590 --rc genhtml_function_coverage=1 01:14:55.590 --rc genhtml_legend=1 01:14:55.590 --rc geninfo_all_blocks=1 01:14:55.590 --rc geninfo_unexecuted_blocks=1 01:14:55.590 01:14:55.590 ' 01:14:55.590 06:13:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:14:55.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:55.590 --rc genhtml_branch_coverage=1 01:14:55.591 --rc genhtml_function_coverage=1 01:14:55.591 --rc genhtml_legend=1 01:14:55.591 --rc geninfo_all_blocks=1 01:14:55.591 --rc geninfo_unexecuted_blocks=1 01:14:55.591 01:14:55.591 ' 01:14:55.591 06:13:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:14:55.591 06:13:49 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 01:14:55.591 06:13:49 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:14:55.591 06:13:49 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:14:55.591 06:13:49 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:14:55.591 06:13:49 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:14:55.591 06:13:49 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
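[annotation] The trace above is scripts/common.sh deciding, via a dotted-version compare (lt 1.15 2), whether the installed lcov is older than 2.x and therefore needs the legacy --rc lcov_branch_coverage/lcov_function_coverage flags. A minimal sketch of that comparison outside the SPDK helpers; the function name ver_lt is mine, not from the script:

  # Return success if dotted version $1 is strictly less than $2.
  ver_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
          (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }

  lcov_ver=$(lcov --version | awk '{print $NF}')
  if ver_lt "$lcov_ver" 2; then
      LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi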
NVMF_IP_PREFIX=192.168.100 01:14:55.591 06:13:49 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:14:55.591 06:13:49 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:14:55.591 06:13:49 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:14:55.591 06:13:49 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:14:55.591 06:13:49 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:14:55.591 06:13:49 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:14:55.591 06:13:49 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:14:55.591 06:13:49 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:14:55.591 06:13:49 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:14:55.591 06:13:49 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:14:55.591 06:13:49 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:14:55.591 06:13:49 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:14:55.591 06:13:49 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:14:55.591 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
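[annotation] The "[: : integer expression expected" message above comes from nvmf/common.sh line 33 running a numeric test on an empty string ('[' '' -eq 1 ']'); the harness tolerates it because the test simply evaluates false. A hedged sketch of the defensive pattern that avoids the noise; the variable name INTERRUPT_MODE is a placeholder, not the actual name used in common.sh:

  # Numeric test on a possibly-empty variable: '[' "" -eq 1 ']' prints
  # "integer expression expected". Defaulting the expansion avoids that.
  INTERRUPT_MODE=""                      # placeholder; may be unset in CI
  if [ "${INTERRUPT_MODE:-0}" -eq 1 ]; then
      echo "interrupt mode requested"
  fi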
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:14:55.591 Cannot find device "nvmf_init_br" 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:14:55.591 Cannot find device "nvmf_init_br2" 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:14:55.591 Cannot find device "nvmf_tgt_br" 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:14:55.591 Cannot find device "nvmf_tgt_br2" 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:14:55.591 Cannot find device "nvmf_init_br" 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:14:55.591 Cannot find device "nvmf_init_br2" 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:14:55.591 Cannot find device "nvmf_tgt_br" 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:14:55.591 Cannot find device "nvmf_tgt_br2" 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 01:14:55.591 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:14:55.851 Cannot find device "nvmf_br" 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:14:55.851 Cannot find device "nvmf_init_if" 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:14:55.851 Cannot find device "nvmf_init_if2" 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:14:55.851 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:14:55.851 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:14:55.851 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:14:56.111 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:14:56.111 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:14:56.111 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:14:56.111 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
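[annotation] The block above is nvmf_veth_init building the test network: a namespace for the target, veth pairs whose target ends are moved into that namespace, 10.0.0.1/.2 on the initiator side and 10.0.0.3/.4 inside the namespace, all joined through the nvmf_br bridge. A condensed sketch of the same topology for one veth pair, assuming root and the interface names from the trace:

  NS=nvmf_tgt_ns_spdk
  ip netns add "$NS"
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns "$NS"              # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br          # bridge the two host-side ends
  ip link set nvmf_tgt_br  master nvmf_br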
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:14:56.111 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:14:56.111 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:14:56.111 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:14:56.111 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:14:56.111 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:14:56.111 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:14:56.111 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 01:14:56.111 01:14:56.111 --- 10.0.0.3 ping statistics --- 01:14:56.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:14:56.111 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 01:14:56.111 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:14:56.111 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:14:56.111 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.081 ms 01:14:56.111 01:14:56.111 --- 10.0.0.4 ping statistics --- 01:14:56.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:14:56.111 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 01:14:56.111 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:14:56.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:14:56.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 01:14:56.111 01:14:56.111 --- 10.0.0.1 ping statistics --- 01:14:56.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:14:56.111 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 01:14:56.111 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:14:56.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
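[annotation] Each firewall rule above goes through an ipts helper that appends an "-m comment --comment SPDK_NVMF:..." tag. That tag is what let the earlier teardown restore every rule except the SPDK ones with iptables-save | grep -v SPDK_NVMF | iptables-restore. A hedged re-creation of that wrapper and its cleanup counterpart:

  # Add an iptables rule tagged so it can be filtered out again later.
  ipts() {
      iptables "$@" -m comment --comment "SPDK_NVMF:$*"
  }

  ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Teardown: keep every rule except the tagged ones.
  iptables-save | grep -v SPDK_NVMF | iptables-restore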
01:14:56.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 01:14:56.111 01:14:56.111 --- 10.0.0.2 ping statistics --- 01:14:56.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:14:56.111 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 01:14:56.111 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:14:56.111 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 01:14:56.111 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 01:14:56.111 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:14:57.050 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:14:57.050 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:14:57.050 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:14:57.311 06:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:14:57.311 06:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:14:57.311 06:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:14:57.311 06:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:14:57.311 06:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:14:57.311 06:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:14:57.311 06:13:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 01:14:57.311 06:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:14:57.311 06:13:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 01:14:57.311 06:13:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:14:57.311 06:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=83954 01:14:57.311 06:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 01:14:57.311 06:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 83954 01:14:57.311 06:13:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 83954 ']' 01:14:57.311 06:13:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:14:57.311 06:13:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 01:14:57.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:14:57.311 06:13:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:14:57.311 06:13:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 01:14:57.311 06:13:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:14:57.311 [2024-12-09 06:13:51.749480] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
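[annotation] nvmfappstart above launches nvmf_tgt inside the target namespace and then blocks until its RPC socket answers. A minimal sketch of that start-and-wait pattern; the polling loop is mine (the real waitforlisten helper lives in autotest_common.sh), and /var/tmp/spdk.sock is the default RPC socket assumed here:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
  nvmfpid=$!

  # Poll the RPC socket until the app is ready (simplified waitforlisten).
  for _ in $(seq 1 100); do
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
              rpc_get_methods &>/dev/null; then
          break
      fi
      sleep 0.1
  done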
01:14:57.311 [2024-12-09 06:13:51.749545] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:14:57.570 [2024-12-09 06:13:51.904052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:14:57.570 [2024-12-09 06:13:51.964324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:14:57.570 [2024-12-09 06:13:51.964370] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:14:57.570 [2024-12-09 06:13:51.964380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:14:57.570 [2024-12-09 06:13:51.964388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:14:57.570 [2024-12-09 06:13:51.964395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:14:57.570 [2024-12-09 06:13:51.965754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:14:57.570 [2024-12-09 06:13:51.965843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:14:57.570 [2024-12-09 06:13:51.966002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:14:57.571 [2024-12-09 06:13:51.966001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:14:57.571 [2024-12-09 06:13:52.037938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:14:58.139 06:13:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:14:58.139 06:13:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 01:14:58.139 06:13:52 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:14:58.139 06:13:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 01:14:58.139 06:13:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:14:58.139 06:13:52 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:14:58.139 06:13:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 01:14:58.139 06:13:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 01:14:58.139 06:13:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 01:14:58.139 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 01:14:58.139 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 01:14:58.139 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 01:14:58.139 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 01:14:58.139 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 01:14:58.139 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 01:14:58.139 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 01:14:58.139 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 01:14:58.139 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 01:14:58.139 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 01:14:58.139 06:13:52 
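[annotation] The -m 0xf core mask passed above maps to reactors on cores 0-3, which matches the four "Reactor started on core N" notices in the trace. A quick sketch for decoding such a mask, useful when checking a run against its expected core list:

  mask=0xf
  cores=()
  for ((i = 0; i < 64; i++)); do
      (( (mask >> i) & 1 )) && cores+=("$i")
  done
  echo "reactors expected on cores: ${cores[*]}"   # 0 1 2 3 for 0xf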
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 01:14:58.139 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 01:14:58.140 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 01:14:58.399 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 01:14:58.399 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 01:14:58.399 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 01:14:58.399 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:14:58.399 06:13:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
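[annotation] nvme_in_userspace above enumerates PCI functions with class 01, subclass 08, prog-if 02 (NVMe controllers) and keeps the ones the test may touch, yielding the two 0000:00:1x.0 devices. A condensed equivalent of the lspci pipeline shown in the trace, hedged as a sketch rather than the helper itself:

  # List NVMe controllers by PCI class code 0108, prog-if 02.
  lspci -mm -n -D | grep -i -- -p02 | awk -v cc='"0108"' '$2 == cc {print $1}'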
01:14:58.399 06:13:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 01:14:58.399 06:13:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 01:14:58.399 06:13:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:58.399 06:13:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:58.399 06:13:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:14:58.399 ************************************ 01:14:58.399 START TEST spdk_target_abort 01:14:58.399 ************************************ 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:14:58.399 spdk_targetn1 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:14:58.399 [2024-12-09 06:13:52.813277] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:14:58.399 [2024-12-09 06:13:52.864438] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:14:58.399 06:13:52 
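[annotation] The spdk_target_abort setup above is essentially five RPCs: attach the local PCIe NVMe as bdev controller "spdk_target" (its namespace shows up as spdk_targetn1), create the TCP transport, create the test subsystem, add that bdev as a namespace, and listen on 10.0.0.3:4420. A sketch using scripts/rpc.py directly, assuming rpc_cmd in the trace is its usual thin wrapper:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420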
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:14:58.399 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:15:01.686 Initializing NVMe Controllers 01:15:01.686 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 01:15:01.686 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:15:01.686 Initialization complete. Launching workers. 
01:15:01.686 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10999, failed: 0 01:15:01.686 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1050, failed to submit 9949 01:15:01.686 success 636, unsuccessful 414, failed 0 01:15:01.686 06:13:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:15:01.686 06:13:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:15:04.975 Initializing NVMe Controllers 01:15:04.975 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 01:15:04.975 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:15:04.975 Initialization complete. Launching workers. 01:15:04.975 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8989, failed: 0 01:15:04.975 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1154, failed to submit 7835 01:15:04.975 success 366, unsuccessful 788, failed 0 01:15:04.975 06:13:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:15:04.975 06:13:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:15:08.265 Initializing NVMe Controllers 01:15:08.265 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 01:15:08.265 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:15:08.265 Initialization complete. Launching workers. 
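[annotation] rabort above sweeps the abort example over queue depths 4, 24 and 64 against the same connect string. In each summary, "I/O completed" counts reads/writes that finished, "abort submitted ... failed to submit" splits the abort commands the tool managed to queue from those it could not, and the success/unsuccessful line reflects whether a submitted abort actually caught its target command. A sketch of the sweep loop:

  ABORT=/home/vagrant/spdk_repo/spdk/build/examples/abort
  TARGET='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  for qd in 4 24 64; do
      "$ABORT" -q "$qd" -w rw -M 50 -o 4096 -r "$TARGET"
  done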
01:15:08.265 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33411, failed: 0 01:15:08.265 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2389, failed to submit 31022 01:15:08.265 success 465, unsuccessful 1924, failed 0 01:15:08.265 06:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 01:15:08.265 06:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:08.265 06:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:15:08.265 06:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:08.265 06:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 01:15:08.265 06:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:08.265 06:14:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:15:08.833 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:08.833 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 83954 01:15:08.833 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 83954 ']' 01:15:08.833 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 83954 01:15:08.833 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 01:15:08.833 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:08.833 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83954 01:15:08.833 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:15:08.833 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:15:08.833 killing process with pid 83954 01:15:08.833 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83954' 01:15:08.833 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 83954 01:15:08.833 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 83954 01:15:09.092 01:15:09.092 real 0m10.874s 01:15:09.092 user 0m42.620s 01:15:09.092 sys 0m2.600s 01:15:09.092 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:09.092 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:15:09.092 ************************************ 01:15:09.092 END TEST spdk_target_abort 01:15:09.092 ************************************ 01:15:09.352 06:14:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 01:15:09.352 06:14:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:09.352 06:14:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:09.352 06:14:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:15:09.352 ************************************ 01:15:09.352 START TEST kernel_target_abort 01:15:09.352 
************************************ 01:15:09.352 06:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 01:15:09.352 06:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 01:15:09.352 06:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 01:15:09.352 06:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 01:15:09.352 06:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 01:15:09.352 06:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:15:09.352 06:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:15:09.352 06:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:15:09.352 06:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:15:09.352 06:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:15:09.352 06:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:15:09.352 06:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:15:09.352 06:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 01:15:09.352 06:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 01:15:09.352 06:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 01:15:09.352 06:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:15:09.352 06:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:15:09.352 06:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 01:15:09.352 06:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 01:15:09.352 06:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 01:15:09.352 06:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 01:15:09.352 06:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 01:15:09.352 06:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:15:09.922 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:15:09.922 Waiting for block devices as requested 01:15:09.922 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:15:10.182 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:15:10.182 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:15:10.182 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 01:15:10.182 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 01:15:10.182 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 01:15:10.182 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:15:10.182 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:15:10.182 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 01:15:10.182 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 01:15:10.182 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 01:15:10.182 No valid GPT data, bailing 01:15:10.182 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:15:10.182 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 01:15:10.182 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 01:15:10.182 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 01:15:10.182 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:15:10.182 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 01:15:10.182 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 01:15:10.182 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 01:15:10.182 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 01:15:10.182 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:15:10.182 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 01:15:10.182 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 01:15:10.182 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 01:15:10.182 No valid GPT data, bailing 01:15:10.182 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 01:15:10.442 No valid GPT data, bailing 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 01:15:10.442 No valid GPT data, bailing 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
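[annotation] The "No valid GPT data, bailing" messages above are the wanted outcome: configure_kernel_target scans /sys/block/nvme* and only exports a namespace that is not zoned and carries no partition table, so it will not clobber a disk in use. A condensed sketch of that selection, with blkid standing in for the spdk-gpt.py probe used by the script:

  nvme_dev=""
  for block in /sys/block/nvme*; do
      dev=/dev/${block##*/}
      [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
      # A device with any recognizable partition-table type is considered in use.
      if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
          nvme_dev=$dev          # keep the last free namespace found
      fi
  done
  echo "kernel target will export: $nvme_dev"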
-b /dev/nvme1n1 ]] 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 01:15:10.442 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 --hostid=bac40580-41f0-4da4-8cd9-1be4901a67b8 -a 10.0.0.1 -t tcp -s 4420 01:15:10.442 01:15:10.442 Discovery Log Number of Records 2, Generation counter 2 01:15:10.442 =====Discovery Log Entry 0====== 01:15:10.442 trtype: tcp 01:15:10.442 adrfam: ipv4 01:15:10.442 subtype: current discovery subsystem 01:15:10.442 treq: not specified, sq flow control disable supported 01:15:10.442 portid: 1 01:15:10.442 trsvcid: 4420 01:15:10.442 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:15:10.442 traddr: 10.0.0.1 01:15:10.442 eflags: none 01:15:10.442 sectype: none 01:15:10.442 =====Discovery Log Entry 1====== 01:15:10.442 trtype: tcp 01:15:10.442 adrfam: ipv4 01:15:10.442 subtype: nvme subsystem 01:15:10.442 treq: not specified, sq flow control disable supported 01:15:10.442 portid: 1 01:15:10.442 trsvcid: 4420 01:15:10.442 subnqn: nqn.2016-06.io.spdk:testnqn 01:15:10.442 traddr: 10.0.0.1 01:15:10.442 eflags: none 01:15:10.442 sectype: none 01:15:10.442 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 01:15:10.442 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 01:15:10.442 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 01:15:10.442 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 01:15:10.442 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 01:15:10.443 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 01:15:10.713 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 01:15:10.713 06:14:05 
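[annotation] The mkdir/echo/ln sequence above is the entire kernel NVMe-oF target configuration through configfs: create the subsystem and namespace, point the namespace at the chosen block device, open a TCP port on 10.0.0.1:4420 and link the subsystem into it, then verify with nvme discover. A standalone sketch using the upstream nvmet configfs attribute names; the explicit nvmet-tcp modprobe is an assumption for portability (the trace loads only nvmet and lets the transport come in on demand):

  NQN=nqn.2016-06.io.spdk:testnqn
  CFG=/sys/kernel/config/nvmet
  modprobe nvmet
  modprobe nvmet-tcp                               # assumed; may auto-load

  mkdir -p "$CFG/subsystems/$NQN"
  echo 1 > "$CFG/subsystems/$NQN/attr_allow_any_host"
  mkdir -p "$CFG/subsystems/$NQN/namespaces/1"
  echo /dev/nvme1n1 > "$CFG/subsystems/$NQN/namespaces/1/device_path"
  echo 1            > "$CFG/subsystems/$NQN/namespaces/1/enable"

  mkdir -p "$CFG/ports/1"
  echo 10.0.0.1 > "$CFG/ports/1/addr_traddr"
  echo tcp      > "$CFG/ports/1/addr_trtype"
  echo 4420     > "$CFG/ports/1/addr_trsvcid"
  echo ipv4     > "$CFG/ports/1/addr_adrfam"
  ln -s "$CFG/subsystems/$NQN" "$CFG/ports/1/subsystems/$NQN"

  nvme discover -t tcp -a 10.0.0.1 -s 4420         # should list both log entries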
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 01:15:10.713 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 01:15:10.713 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:15:10.713 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 01:15:10.713 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:15:10.713 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 01:15:10.713 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:15:10.713 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 01:15:10.713 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:15:10.713 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 01:15:10.713 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:15:10.713 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:15:10.713 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:15:10.713 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:15:14.013 Initializing NVMe Controllers 01:15:14.013 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:15:14.013 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:15:14.013 Initialization complete. Launching workers. 01:15:14.013 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34919, failed: 0 01:15:14.013 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34919, failed to submit 0 01:15:14.013 success 0, unsuccessful 34919, failed 0 01:15:14.013 06:14:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:15:14.013 06:14:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:15:17.306 Initializing NVMe Controllers 01:15:17.306 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:15:17.306 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:15:17.306 Initialization complete. Launching workers. 
01:15:17.306 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 57642, failed: 0 01:15:17.306 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31443, failed to submit 26199 01:15:17.306 success 0, unsuccessful 31443, failed 0 01:15:17.306 06:14:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:15:17.306 06:14:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:15:20.653 Initializing NVMe Controllers 01:15:20.654 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:15:20.654 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:15:20.654 Initialization complete. Launching workers. 01:15:20.654 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 92676, failed: 0 01:15:20.654 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23160, failed to submit 69516 01:15:20.654 success 0, unsuccessful 23160, failed 0 01:15:20.654 06:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 01:15:20.654 06:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 01:15:20.654 06:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 01:15:20.654 06:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 01:15:20.654 06:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:15:20.654 06:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 01:15:20.654 06:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:15:20.654 06:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 01:15:20.654 06:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 01:15:20.654 06:14:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:15:21.221 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:15:23.128 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:15:23.128 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:15:23.128 01:15:23.128 real 0m14.006s 01:15:23.128 user 0m5.963s 01:15:23.128 sys 0m5.022s 01:15:23.128 06:14:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:23.128 06:14:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 01:15:23.128 ************************************ 01:15:23.128 END TEST kernel_target_abort 01:15:23.128 ************************************ 01:15:23.386 06:14:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 01:15:23.386 06:14:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 01:15:23.386 
06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 01:15:23.386 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 01:15:23.386 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:15:23.386 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 01:15:23.387 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 01:15:23.387 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:15:23.387 rmmod nvme_tcp 01:15:23.387 rmmod nvme_fabrics 01:15:23.387 rmmod nvme_keyring 01:15:23.387 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:15:23.387 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 01:15:23.387 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 01:15:23.387 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 83954 ']' 01:15:23.387 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 83954 01:15:23.387 06:14:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 83954 ']' 01:15:23.387 06:14:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 83954 01:15:23.387 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (83954) - No such process 01:15:23.387 Process with pid 83954 is not found 01:15:23.387 06:14:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 83954 is not found' 01:15:23.387 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 01:15:23.387 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:15:23.954 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:15:23.954 Waiting for block devices as requested 01:15:24.213 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:15:24.213 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:15:24.214 06:14:18 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:15:24.214 06:14:18 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:15:24.214 06:14:18 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 01:15:24.214 06:14:18 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 01:15:24.214 06:14:18 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:15:24.214 06:14:18 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 01:15:24.214 06:14:18 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:15:24.214 06:14:18 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:15:24.214 06:14:18 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:15:24.473 06:14:18 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:15:24.473 06:14:18 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:15:24.473 06:14:18 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:15:24.473 06:14:18 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:15:24.473 06:14:18 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:15:24.473 06:14:18 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:15:24.473 06:14:18 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:15:24.473 06:14:18 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:15:24.473 06:14:18 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:15:24.473 06:14:18 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:15:24.473 06:14:18 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:15:24.473 06:14:18 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:15:24.473 06:14:19 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 01:15:24.473 06:14:19 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:15:24.473 06:14:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:15:24.473 06:14:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:15:24.732 06:14:19 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 01:15:24.732 01:15:24.732 real 0m29.337s 01:15:24.732 user 0m50.069s 01:15:24.732 sys 0m9.798s 01:15:24.732 06:14:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:24.732 06:14:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:15:24.732 ************************************ 01:15:24.732 END TEST nvmf_abort_qd_sizes 01:15:24.732 ************************************ 01:15:24.732 06:14:19 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 01:15:24.732 06:14:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:24.732 06:14:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:24.732 06:14:19 -- common/autotest_common.sh@10 -- # set +x 01:15:24.732 ************************************ 01:15:24.732 START TEST keyring_file 01:15:24.732 ************************************ 01:15:24.732 06:14:19 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 01:15:24.732 * Looking for test storage... 
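Annotation: before the keyring_file run gets going, note that the kernel_target_abort test above drove a kernel nvmet target configured entirely through configfs. The xtrace hides the redirect targets of the echo commands, so the attribute file names below are assumed from the standard kernel nvmet configfs layout; a minimal sketch of the setup and the matching teardown is:

  nvmet=/sys/kernel/config/nvmet
  nqn=nqn.2016-06.io.spdk:testnqn

  # Setup: one subsystem, one namespace backed by the local NVMe disk, one TCP port.
  mkdir "$nvmet/subsystems/$nqn" "$nvmet/subsystems/$nqn/namespaces/1" "$nvmet/ports/1"
  echo "SPDK-$nqn"  > "$nvmet/subsystems/$nqn/attr_model"             # identification string; attribute name assumed
  echo 1            > "$nvmet/subsystems/$nqn/attr_allow_any_host"    # attribute name assumed
  echo /dev/nvme1n1 > "$nvmet/subsystems/$nqn/namespaces/1/device_path"
  echo 1            > "$nvmet/subsystems/$nqn/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$nvmet/subsystems/$nqn" "$nvmet/ports/1/subsystems/"

  # Teardown mirrors the clean_kernel_target steps traced earlier.
  echo 0 > "$nvmet/subsystems/$nqn/namespaces/1/enable"
  rm -f "$nvmet/ports/1/subsystems/$nqn"
  rmdir "$nvmet/subsystems/$nqn/namespaces/1" "$nvmet/ports/1" "$nvmet/subsystems/$nqn"
  modprobe -r nvmet_tcp nvmet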
01:15:24.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 01:15:24.732 06:14:19 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:15:24.732 06:14:19 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 01:15:24.732 06:14:19 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:15:24.993 06:14:19 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@344 -- # case "$op" in 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@345 -- # : 1 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@365 -- # decimal 1 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@353 -- # local d=1 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@355 -- # echo 1 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@366 -- # decimal 2 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@353 -- # local d=2 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@355 -- # echo 2 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@368 -- # return 0 01:15:24.993 06:14:19 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:15:24.993 06:14:19 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:15:24.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:24.993 --rc genhtml_branch_coverage=1 01:15:24.993 --rc genhtml_function_coverage=1 01:15:24.993 --rc genhtml_legend=1 01:15:24.993 --rc geninfo_all_blocks=1 01:15:24.993 --rc geninfo_unexecuted_blocks=1 01:15:24.993 01:15:24.993 ' 01:15:24.993 06:14:19 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:15:24.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:24.993 --rc genhtml_branch_coverage=1 01:15:24.993 --rc genhtml_function_coverage=1 01:15:24.993 --rc genhtml_legend=1 01:15:24.993 --rc geninfo_all_blocks=1 01:15:24.993 --rc 
geninfo_unexecuted_blocks=1 01:15:24.993 01:15:24.993 ' 01:15:24.993 06:14:19 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:15:24.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:24.993 --rc genhtml_branch_coverage=1 01:15:24.993 --rc genhtml_function_coverage=1 01:15:24.993 --rc genhtml_legend=1 01:15:24.993 --rc geninfo_all_blocks=1 01:15:24.993 --rc geninfo_unexecuted_blocks=1 01:15:24.993 01:15:24.993 ' 01:15:24.993 06:14:19 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:15:24.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:24.993 --rc genhtml_branch_coverage=1 01:15:24.993 --rc genhtml_function_coverage=1 01:15:24.993 --rc genhtml_legend=1 01:15:24.993 --rc geninfo_all_blocks=1 01:15:24.993 --rc geninfo_unexecuted_blocks=1 01:15:24.993 01:15:24.993 ' 01:15:24.993 06:14:19 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 01:15:24.993 06:14:19 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:15:24.993 06:14:19 keyring_file -- nvmf/common.sh@7 -- # uname -s 01:15:24.993 06:14:19 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:15:24.993 06:14:19 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:15:24.993 06:14:19 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:15:24.993 06:14:19 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:15:24.993 06:14:19 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:15:24.993 06:14:19 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:15:24.993 06:14:19 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:15:24.993 06:14:19 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:15:24.993 06:14:19 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:15:24.993 06:14:19 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:15:24.993 06:14:19 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:15:24.993 06:14:19 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:15:24.993 06:14:19 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:15:24.993 06:14:19 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:15:24.993 06:14:19 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:15:24.993 06:14:19 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:15:24.993 06:14:19 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:15:24.993 06:14:19 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:15:24.994 06:14:19 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:24.994 06:14:19 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:24.994 06:14:19 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:24.994 06:14:19 keyring_file -- paths/export.sh@5 -- # export PATH 01:15:24.994 06:14:19 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:24.994 06:14:19 keyring_file -- nvmf/common.sh@51 -- # : 0 01:15:24.994 06:14:19 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:15:24.994 06:14:19 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:15:24.994 06:14:19 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:15:24.994 06:14:19 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:15:24.994 06:14:19 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:15:24.994 06:14:19 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:15:24.994 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:15:24.994 06:14:19 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:15:24.994 06:14:19 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:15:24.994 06:14:19 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 01:15:24.994 06:14:19 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 01:15:24.994 06:14:19 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 01:15:24.994 06:14:19 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 01:15:24.994 06:14:19 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 01:15:24.994 06:14:19 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 01:15:24.994 06:14:19 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 01:15:24.994 06:14:19 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 01:15:24.994 06:14:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:15:24.994 06:14:19 
keyring_file -- keyring/common.sh@17 -- # name=key0 01:15:24.994 06:14:19 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:15:24.994 06:14:19 keyring_file -- keyring/common.sh@17 -- # digest=0 01:15:24.994 06:14:19 keyring_file -- keyring/common.sh@18 -- # mktemp 01:15:24.994 06:14:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.gTe0lgrBbr 01:15:24.994 06:14:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:15:24.994 06:14:19 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:15:24.994 06:14:19 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 01:15:24.994 06:14:19 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:15:24.994 06:14:19 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 01:15:24.994 06:14:19 keyring_file -- nvmf/common.sh@732 -- # digest=0 01:15:24.994 06:14:19 keyring_file -- nvmf/common.sh@733 -- # python - 01:15:24.994 06:14:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.gTe0lgrBbr 01:15:24.994 06:14:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.gTe0lgrBbr 01:15:24.994 06:14:19 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.gTe0lgrBbr 01:15:24.994 06:14:19 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 01:15:24.994 06:14:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:15:24.994 06:14:19 keyring_file -- keyring/common.sh@17 -- # name=key1 01:15:24.994 06:14:19 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 01:15:24.994 06:14:19 keyring_file -- keyring/common.sh@17 -- # digest=0 01:15:24.994 06:14:19 keyring_file -- keyring/common.sh@18 -- # mktemp 01:15:24.994 06:14:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.2t9h1a6OBh 01:15:24.994 06:14:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 01:15:24.994 06:14:19 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 01:15:24.994 06:14:19 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 01:15:24.994 06:14:19 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:15:24.994 06:14:19 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 01:15:24.994 06:14:19 keyring_file -- nvmf/common.sh@732 -- # digest=0 01:15:24.994 06:14:19 keyring_file -- nvmf/common.sh@733 -- # python - 01:15:24.994 06:14:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.2t9h1a6OBh 01:15:25.253 06:14:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.2t9h1a6OBh 01:15:25.253 06:14:19 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.2t9h1a6OBh 01:15:25.253 06:14:19 keyring_file -- keyring/file.sh@30 -- # tgtpid=84889 01:15:25.253 06:14:19 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:15:25.253 06:14:19 keyring_file -- keyring/file.sh@32 -- # waitforlisten 84889 01:15:25.253 06:14:19 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 84889 ']' 01:15:25.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
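Annotation: both key files now exist. The prep_key calls traced above write each 16-byte hex key into a temporary file in the NVMe TLS PSK interchange format and restrict it to mode 0600 so that the later keyring_file_add_key permission checks pass. A rough standalone sketch follows; the exact framing (base64 of the key bytes plus a little-endian CRC32, wrapped as NVMeTLSkey-1:<hash>:<...>:) is an assumption based on the usual interchange format and is not visible in the trace:

  key=00112233445566778899aabbccddeeff
  path=$(mktemp)   # e.g. /tmp/tmp.gTe0lgrBbr in the run above
  python3 -c 'import base64,binascii,struct,sys; raw=bytes.fromhex(sys.argv[1]); print("NVMeTLSkey-1:00:%s:" % base64.b64encode(raw+struct.pack("<I",binascii.crc32(raw)&0xffffffff)).decode())' "$key" > "$path"
  chmod 0600 "$path"   # keyring_file_add_key rejects keys readable by group/other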
01:15:25.253 06:14:19 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:25.253 06:14:19 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:25.253 06:14:19 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:25.253 06:14:19 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:25.253 06:14:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:15:25.253 [2024-12-09 06:14:19.648391] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:15:25.253 [2024-12-09 06:14:19.649202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84889 ] 01:15:25.253 [2024-12-09 06:14:19.802815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:25.513 [2024-12-09 06:14:19.863984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:25.513 [2024-12-09 06:14:19.961581] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:15:26.082 06:14:20 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:26.082 06:14:20 keyring_file -- common/autotest_common.sh@868 -- # return 0 01:15:26.082 06:14:20 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 01:15:26.082 06:14:20 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:26.082 06:14:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:15:26.082 [2024-12-09 06:14:20.503958] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:15:26.082 null0 01:15:26.082 [2024-12-09 06:14:20.535880] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:15:26.082 [2024-12-09 06:14:20.536342] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:15:26.082 06:14:20 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:26.082 06:14:20 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:15:26.082 06:14:20 keyring_file -- common/autotest_common.sh@652 -- # local es=0 01:15:26.082 06:14:20 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:15:26.082 06:14:20 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:15:26.082 06:14:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:26.082 06:14:20 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:15:26.082 06:14:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:26.082 06:14:20 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:15:26.082 06:14:20 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:26.082 06:14:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:15:26.082 [2024-12-09 06:14:20.567822] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 01:15:26.082 request: 01:15:26.082 { 01:15:26.082 "nqn": "nqn.2016-06.io.spdk:cnode0", 01:15:26.082 "secure_channel": false, 
01:15:26.082 "listen_address": { 01:15:26.082 "trtype": "tcp", 01:15:26.082 "traddr": "127.0.0.1", 01:15:26.082 "trsvcid": "4420" 01:15:26.082 }, 01:15:26.082 "method": "nvmf_subsystem_add_listener", 01:15:26.082 "req_id": 1 01:15:26.082 } 01:15:26.082 Got JSON-RPC error response 01:15:26.082 response: 01:15:26.082 { 01:15:26.082 "code": -32602, 01:15:26.082 "message": "Invalid parameters" 01:15:26.082 } 01:15:26.082 06:14:20 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:15:26.082 06:14:20 keyring_file -- common/autotest_common.sh@655 -- # es=1 01:15:26.082 06:14:20 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:15:26.082 06:14:20 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:15:26.082 06:14:20 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:15:26.082 06:14:20 keyring_file -- keyring/file.sh@47 -- # bperfpid=84900 01:15:26.082 06:14:20 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 01:15:26.082 06:14:20 keyring_file -- keyring/file.sh@49 -- # waitforlisten 84900 /var/tmp/bperf.sock 01:15:26.082 06:14:20 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 84900 ']' 01:15:26.082 06:14:20 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:15:26.082 06:14:20 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:26.082 06:14:20 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:15:26.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:15:26.082 06:14:20 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:26.082 06:14:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:15:26.082 [2024-12-09 06:14:20.635634] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
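Annotation: the failed nvmf_subsystem_add_listener call above is intentional. The target already listens on 127.0.0.1:4420, so the RPC returns JSON-RPC error -32602 (Invalid parameters), and the NOT wrapper turns that expected failure into a test pass. A simplified stand-in for the wrapper (the real helper in autotest_common.sh also handles xtrace and exit-status propagation) is:

  # Succeed only when the wrapped command fails, as used for negative RPC tests.
  NOT() {
    if "$@"; then
      return 1
    fi
    return 0
  }

  # Expected to fail: the target already listens on 127.0.0.1:4420 for this subsystem.
  NOT scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0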
01:15:26.082 [2024-12-09 06:14:20.635702] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84900 ] 01:15:26.342 [2024-12-09 06:14:20.786352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:26.342 [2024-12-09 06:14:20.843304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:15:26.342 [2024-12-09 06:14:20.914966] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:15:26.911 06:14:21 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:26.911 06:14:21 keyring_file -- common/autotest_common.sh@868 -- # return 0 01:15:26.911 06:14:21 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gTe0lgrBbr 01:15:26.911 06:14:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gTe0lgrBbr 01:15:27.171 06:14:21 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.2t9h1a6OBh 01:15:27.171 06:14:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.2t9h1a6OBh 01:15:27.430 06:14:21 keyring_file -- keyring/file.sh@52 -- # jq -r .path 01:15:27.430 06:14:21 keyring_file -- keyring/file.sh@52 -- # get_key key0 01:15:27.430 06:14:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:15:27.430 06:14:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:15:27.430 06:14:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:15:27.690 06:14:22 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.gTe0lgrBbr == \/\t\m\p\/\t\m\p\.\g\T\e\0\l\g\r\B\b\r ]] 01:15:27.690 06:14:22 keyring_file -- keyring/file.sh@53 -- # jq -r .path 01:15:27.690 06:14:22 keyring_file -- keyring/file.sh@53 -- # get_key key1 01:15:27.690 06:14:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:15:27.690 06:14:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:15:27.690 06:14:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:15:27.950 06:14:22 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.2t9h1a6OBh == \/\t\m\p\/\t\m\p\.\2\t\9\h\1\a\6\O\B\h ]] 01:15:27.950 06:14:22 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 01:15:27.950 06:14:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:15:27.950 06:14:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:15:27.950 06:14:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:15:27.950 06:14:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:15:27.950 06:14:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:15:27.950 06:14:22 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 01:15:27.950 06:14:22 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 01:15:27.950 06:14:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:15:27.950 06:14:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:15:27.950 06:14:22 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:15:27.950 06:14:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:15:27.950 06:14:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:15:28.210 06:14:22 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 01:15:28.210 06:14:22 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:15:28.210 06:14:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:15:28.469 [2024-12-09 06:14:22.920951] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:15:28.469 nvme0n1 01:15:28.469 06:14:23 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 01:15:28.469 06:14:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:15:28.469 06:14:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:15:28.469 06:14:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:15:28.469 06:14:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:15:28.469 06:14:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:15:28.729 06:14:23 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 01:15:28.729 06:14:23 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 01:15:28.729 06:14:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:15:28.729 06:14:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:15:28.729 06:14:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:15:28.729 06:14:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:15:28.729 06:14:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:15:28.989 06:14:23 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 01:15:28.989 06:14:23 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:15:28.989 Running I/O for 1 seconds... 
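Annotation: every bperf_cmd traced above goes through the bdevperf RPC socket: register the key files, attach a controller that references key0, verify the key reference counts, and only then kick off I/O. A condensed sequence of the exact RPCs shown, using the rpc.py path and socket from the trace, is:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

  $rpc keyring_file_add_key key0 /tmp/tmp.gTe0lgrBbr
  $rpc keyring_file_add_key key1 /tmp/tmp.2t9h1a6OBh
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  # key0 is now referenced by the keyring and by the TLS session, so refcnt == 2.
  $rpc keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'
  # Drive I/O through bdevperf's RPC interface.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests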
01:15:30.369 17118.00 IOPS, 66.87 MiB/s 01:15:30.369 Latency(us) 01:15:30.369 [2024-12-09T06:14:24.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:15:30.369 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 01:15:30.369 nvme0n1 : 1.01 17114.97 66.86 0.00 0.00 7452.25 4632.26 22003.25 01:15:30.369 [2024-12-09T06:14:24.956Z] =================================================================================================================== 01:15:30.369 [2024-12-09T06:14:24.956Z] Total : 17114.97 66.86 0.00 0.00 7452.25 4632.26 22003.25 01:15:30.369 { 01:15:30.369 "results": [ 01:15:30.369 { 01:15:30.369 "job": "nvme0n1", 01:15:30.369 "core_mask": "0x2", 01:15:30.369 "workload": "randrw", 01:15:30.369 "percentage": 50, 01:15:30.369 "status": "finished", 01:15:30.369 "queue_depth": 128, 01:15:30.369 "io_size": 4096, 01:15:30.369 "runtime": 1.007773, 01:15:30.369 "iops": 17114.965374146755, 01:15:30.369 "mibps": 66.85533349276076, 01:15:30.369 "io_failed": 0, 01:15:30.369 "io_timeout": 0, 01:15:30.369 "avg_latency_us": 7452.25344569372, 01:15:30.369 "min_latency_us": 4632.263453815261, 01:15:30.369 "max_latency_us": 22003.25140562249 01:15:30.369 } 01:15:30.369 ], 01:15:30.369 "core_count": 1 01:15:30.369 } 01:15:30.369 06:14:24 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:15:30.369 06:14:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:15:30.369 06:14:24 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 01:15:30.369 06:14:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:15:30.369 06:14:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:15:30.369 06:14:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:15:30.369 06:14:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:15:30.369 06:14:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:15:30.634 06:14:24 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 01:15:30.634 06:14:24 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 01:15:30.634 06:14:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:15:30.634 06:14:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:15:30.634 06:14:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:15:30.634 06:14:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:15:30.635 06:14:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:15:30.635 06:14:25 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 01:15:30.635 06:14:25 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:15:30.635 06:14:25 keyring_file -- common/autotest_common.sh@652 -- # local es=0 01:15:30.635 06:14:25 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:15:30.635 06:14:25 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 01:15:30.635 06:14:25 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:30.635 06:14:25 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 01:15:30.635 06:14:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:30.635 06:14:25 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:15:30.635 06:14:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:15:30.896 [2024-12-09 06:14:25.362633] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:15:30.896 [2024-12-09 06:14:25.363330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b65d0 (107): Transport endpoint is not connected 01:15:30.896 [2024-12-09 06:14:25.364319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b65d0 (9): Bad file descriptor 01:15:30.896 [2024-12-09 06:14:25.365317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 01:15:30.896 [2024-12-09 06:14:25.365340] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 01:15:30.896 [2024-12-09 06:14:25.365350] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 01:15:30.896 [2024-12-09 06:14:25.365360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
01:15:30.896 request: 01:15:30.896 { 01:15:30.896 "name": "nvme0", 01:15:30.896 "trtype": "tcp", 01:15:30.896 "traddr": "127.0.0.1", 01:15:30.896 "adrfam": "ipv4", 01:15:30.896 "trsvcid": "4420", 01:15:30.896 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:15:30.896 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:15:30.896 "prchk_reftag": false, 01:15:30.896 "prchk_guard": false, 01:15:30.896 "hdgst": false, 01:15:30.896 "ddgst": false, 01:15:30.896 "psk": "key1", 01:15:30.896 "allow_unrecognized_csi": false, 01:15:30.896 "method": "bdev_nvme_attach_controller", 01:15:30.896 "req_id": 1 01:15:30.896 } 01:15:30.896 Got JSON-RPC error response 01:15:30.896 response: 01:15:30.896 { 01:15:30.896 "code": -5, 01:15:30.896 "message": "Input/output error" 01:15:30.896 } 01:15:30.896 06:14:25 keyring_file -- common/autotest_common.sh@655 -- # es=1 01:15:30.896 06:14:25 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:15:30.896 06:14:25 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:15:30.896 06:14:25 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:15:30.896 06:14:25 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 01:15:30.896 06:14:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:15:30.896 06:14:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:15:30.896 06:14:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:15:30.896 06:14:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:15:30.896 06:14:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:15:31.154 06:14:25 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 01:15:31.154 06:14:25 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 01:15:31.154 06:14:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:15:31.154 06:14:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:15:31.154 06:14:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:15:31.154 06:14:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:15:31.154 06:14:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:15:31.412 06:14:25 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 01:15:31.412 06:14:25 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 01:15:31.412 06:14:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:15:31.412 06:14:25 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 01:15:31.412 06:14:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 01:15:31.671 06:14:26 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 01:15:31.671 06:14:26 keyring_file -- keyring/file.sh@78 -- # jq length 01:15:31.671 06:14:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:15:31.929 06:14:26 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 01:15:31.929 06:14:26 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.gTe0lgrBbr 01:15:31.929 06:14:26 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.gTe0lgrBbr 01:15:31.929 06:14:26 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 01:15:31.929 06:14:26 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.gTe0lgrBbr 01:15:31.929 06:14:26 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 01:15:31.929 06:14:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:31.929 06:14:26 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 01:15:31.929 06:14:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:31.929 06:14:26 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gTe0lgrBbr 01:15:31.929 06:14:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gTe0lgrBbr 01:15:32.187 [2024-12-09 06:14:26.585849] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.gTe0lgrBbr': 0100660 01:15:32.187 [2024-12-09 06:14:26.585877] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 01:15:32.187 request: 01:15:32.187 { 01:15:32.187 "name": "key0", 01:15:32.187 "path": "/tmp/tmp.gTe0lgrBbr", 01:15:32.187 "method": "keyring_file_add_key", 01:15:32.187 "req_id": 1 01:15:32.187 } 01:15:32.187 Got JSON-RPC error response 01:15:32.187 response: 01:15:32.187 { 01:15:32.187 "code": -1, 01:15:32.187 "message": "Operation not permitted" 01:15:32.187 } 01:15:32.187 06:14:26 keyring_file -- common/autotest_common.sh@655 -- # es=1 01:15:32.187 06:14:26 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:15:32.187 06:14:26 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:15:32.187 06:14:26 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:15:32.187 06:14:26 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.gTe0lgrBbr 01:15:32.187 06:14:26 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gTe0lgrBbr 01:15:32.187 06:14:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gTe0lgrBbr 01:15:32.446 06:14:26 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.gTe0lgrBbr 01:15:32.446 06:14:26 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 01:15:32.446 06:14:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:15:32.446 06:14:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:15:32.446 06:14:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:15:32.446 06:14:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:15:32.446 06:14:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:15:32.446 06:14:27 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 01:15:32.446 06:14:27 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:15:32.446 06:14:27 keyring_file -- common/autotest_common.sh@652 -- # local es=0 01:15:32.446 06:14:27 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:15:32.446 06:14:27 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 01:15:32.446 06:14:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:32.446 06:14:27 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 01:15:32.446 06:14:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:32.446 06:14:27 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:15:32.446 06:14:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:15:32.704 [2024-12-09 06:14:27.204983] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.gTe0lgrBbr': No such file or directory 01:15:32.704 [2024-12-09 06:14:27.205163] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 01:15:32.704 [2024-12-09 06:14:27.205185] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 01:15:32.704 [2024-12-09 06:14:27.205195] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 01:15:32.704 [2024-12-09 06:14:27.205206] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:15:32.704 [2024-12-09 06:14:27.205214] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 01:15:32.704 request: 01:15:32.704 { 01:15:32.704 "name": "nvme0", 01:15:32.704 "trtype": "tcp", 01:15:32.704 "traddr": "127.0.0.1", 01:15:32.704 "adrfam": "ipv4", 01:15:32.704 "trsvcid": "4420", 01:15:32.704 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:15:32.704 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:15:32.704 "prchk_reftag": false, 01:15:32.704 "prchk_guard": false, 01:15:32.704 "hdgst": false, 01:15:32.704 "ddgst": false, 01:15:32.704 "psk": "key0", 01:15:32.704 "allow_unrecognized_csi": false, 01:15:32.704 "method": "bdev_nvme_attach_controller", 01:15:32.704 "req_id": 1 01:15:32.704 } 01:15:32.704 Got JSON-RPC error response 01:15:32.704 response: 01:15:32.704 { 01:15:32.704 "code": -19, 01:15:32.704 "message": "No such device" 01:15:32.704 } 01:15:32.704 06:14:27 keyring_file -- common/autotest_common.sh@655 -- # es=1 01:15:32.704 06:14:27 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:15:32.704 06:14:27 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:15:32.704 06:14:27 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:15:32.704 06:14:27 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 01:15:32.704 06:14:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:15:32.963 06:14:27 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 01:15:32.963 06:14:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:15:32.963 06:14:27 keyring_file -- keyring/common.sh@17 -- # name=key0 01:15:32.963 06:14:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:15:32.963 
06:14:27 keyring_file -- keyring/common.sh@17 -- # digest=0 01:15:32.963 06:14:27 keyring_file -- keyring/common.sh@18 -- # mktemp 01:15:32.963 06:14:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.6MSdIzPd1L 01:15:32.963 06:14:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:15:32.963 06:14:27 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:15:32.963 06:14:27 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 01:15:32.963 06:14:27 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:15:32.963 06:14:27 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 01:15:32.963 06:14:27 keyring_file -- nvmf/common.sh@732 -- # digest=0 01:15:32.963 06:14:27 keyring_file -- nvmf/common.sh@733 -- # python - 01:15:32.963 06:14:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.6MSdIzPd1L 01:15:32.963 06:14:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.6MSdIzPd1L 01:15:32.963 06:14:27 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.6MSdIzPd1L 01:15:32.963 06:14:27 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6MSdIzPd1L 01:15:32.963 06:14:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6MSdIzPd1L 01:15:33.221 06:14:27 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:15:33.221 06:14:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:15:33.480 nvme0n1 01:15:33.480 06:14:27 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 01:15:33.480 06:14:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:15:33.480 06:14:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:15:33.480 06:14:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:15:33.480 06:14:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:15:33.480 06:14:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:15:33.739 06:14:28 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 01:15:33.739 06:14:28 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 01:15:33.739 06:14:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:15:33.997 06:14:28 keyring_file -- keyring/file.sh@102 -- # get_key key0 01:15:33.997 06:14:28 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 01:15:33.997 06:14:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:15:33.997 06:14:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:15:33.997 06:14:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:15:33.997 06:14:28 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 01:15:33.997 06:14:28 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 01:15:33.997 06:14:28 keyring_file -- 
keyring/common.sh@12 -- # jq -r .refcnt 01:15:33.997 06:14:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:15:33.997 06:14:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:15:33.997 06:14:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:15:33.997 06:14:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:15:34.255 06:14:28 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 01:15:34.255 06:14:28 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:15:34.255 06:14:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:15:34.512 06:14:28 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 01:15:34.512 06:14:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:15:34.512 06:14:29 keyring_file -- keyring/file.sh@105 -- # jq length 01:15:34.770 06:14:29 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 01:15:34.770 06:14:29 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6MSdIzPd1L 01:15:34.770 06:14:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6MSdIzPd1L 01:15:35.028 06:14:29 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.2t9h1a6OBh 01:15:35.028 06:14:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.2t9h1a6OBh 01:15:35.286 06:14:29 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:15:35.286 06:14:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:15:35.545 nvme0n1 01:15:35.545 06:14:29 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 01:15:35.545 06:14:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 01:15:35.804 06:14:30 keyring_file -- keyring/file.sh@113 -- # config='{ 01:15:35.804 "subsystems": [ 01:15:35.804 { 01:15:35.804 "subsystem": "keyring", 01:15:35.804 "config": [ 01:15:35.804 { 01:15:35.804 "method": "keyring_file_add_key", 01:15:35.804 "params": { 01:15:35.804 "name": "key0", 01:15:35.804 "path": "/tmp/tmp.6MSdIzPd1L" 01:15:35.804 } 01:15:35.804 }, 01:15:35.804 { 01:15:35.804 "method": "keyring_file_add_key", 01:15:35.804 "params": { 01:15:35.804 "name": "key1", 01:15:35.804 "path": "/tmp/tmp.2t9h1a6OBh" 01:15:35.804 } 01:15:35.804 } 01:15:35.804 ] 01:15:35.804 }, 01:15:35.804 { 01:15:35.804 "subsystem": "iobuf", 01:15:35.804 "config": [ 01:15:35.804 { 01:15:35.804 "method": "iobuf_set_options", 01:15:35.804 "params": { 01:15:35.804 "small_pool_count": 8192, 01:15:35.804 "large_pool_count": 1024, 01:15:35.804 "small_bufsize": 8192, 01:15:35.804 "large_bufsize": 135168, 01:15:35.804 "enable_numa": false 01:15:35.804 } 01:15:35.804 } 01:15:35.804 ] 01:15:35.804 }, 01:15:35.804 { 01:15:35.804 "subsystem": 
"sock", 01:15:35.804 "config": [ 01:15:35.804 { 01:15:35.804 "method": "sock_set_default_impl", 01:15:35.804 "params": { 01:15:35.804 "impl_name": "uring" 01:15:35.804 } 01:15:35.804 }, 01:15:35.804 { 01:15:35.804 "method": "sock_impl_set_options", 01:15:35.804 "params": { 01:15:35.804 "impl_name": "ssl", 01:15:35.804 "recv_buf_size": 4096, 01:15:35.804 "send_buf_size": 4096, 01:15:35.804 "enable_recv_pipe": true, 01:15:35.804 "enable_quickack": false, 01:15:35.804 "enable_placement_id": 0, 01:15:35.804 "enable_zerocopy_send_server": true, 01:15:35.804 "enable_zerocopy_send_client": false, 01:15:35.804 "zerocopy_threshold": 0, 01:15:35.804 "tls_version": 0, 01:15:35.804 "enable_ktls": false 01:15:35.804 } 01:15:35.804 }, 01:15:35.804 { 01:15:35.804 "method": "sock_impl_set_options", 01:15:35.804 "params": { 01:15:35.804 "impl_name": "posix", 01:15:35.804 "recv_buf_size": 2097152, 01:15:35.804 "send_buf_size": 2097152, 01:15:35.804 "enable_recv_pipe": true, 01:15:35.804 "enable_quickack": false, 01:15:35.804 "enable_placement_id": 0, 01:15:35.804 "enable_zerocopy_send_server": true, 01:15:35.804 "enable_zerocopy_send_client": false, 01:15:35.804 "zerocopy_threshold": 0, 01:15:35.804 "tls_version": 0, 01:15:35.804 "enable_ktls": false 01:15:35.804 } 01:15:35.804 }, 01:15:35.804 { 01:15:35.804 "method": "sock_impl_set_options", 01:15:35.804 "params": { 01:15:35.804 "impl_name": "uring", 01:15:35.804 "recv_buf_size": 2097152, 01:15:35.804 "send_buf_size": 2097152, 01:15:35.804 "enable_recv_pipe": true, 01:15:35.804 "enable_quickack": false, 01:15:35.804 "enable_placement_id": 0, 01:15:35.804 "enable_zerocopy_send_server": false, 01:15:35.804 "enable_zerocopy_send_client": false, 01:15:35.804 "zerocopy_threshold": 0, 01:15:35.804 "tls_version": 0, 01:15:35.804 "enable_ktls": false 01:15:35.804 } 01:15:35.804 } 01:15:35.804 ] 01:15:35.804 }, 01:15:35.804 { 01:15:35.804 "subsystem": "vmd", 01:15:35.804 "config": [] 01:15:35.804 }, 01:15:35.804 { 01:15:35.804 "subsystem": "accel", 01:15:35.804 "config": [ 01:15:35.805 { 01:15:35.805 "method": "accel_set_options", 01:15:35.805 "params": { 01:15:35.805 "small_cache_size": 128, 01:15:35.805 "large_cache_size": 16, 01:15:35.805 "task_count": 2048, 01:15:35.805 "sequence_count": 2048, 01:15:35.805 "buf_count": 2048 01:15:35.805 } 01:15:35.805 } 01:15:35.805 ] 01:15:35.805 }, 01:15:35.805 { 01:15:35.805 "subsystem": "bdev", 01:15:35.805 "config": [ 01:15:35.805 { 01:15:35.805 "method": "bdev_set_options", 01:15:35.805 "params": { 01:15:35.805 "bdev_io_pool_size": 65535, 01:15:35.805 "bdev_io_cache_size": 256, 01:15:35.805 "bdev_auto_examine": true, 01:15:35.805 "iobuf_small_cache_size": 128, 01:15:35.805 "iobuf_large_cache_size": 16 01:15:35.805 } 01:15:35.805 }, 01:15:35.805 { 01:15:35.805 "method": "bdev_raid_set_options", 01:15:35.805 "params": { 01:15:35.805 "process_window_size_kb": 1024, 01:15:35.805 "process_max_bandwidth_mb_sec": 0 01:15:35.805 } 01:15:35.805 }, 01:15:35.805 { 01:15:35.805 "method": "bdev_iscsi_set_options", 01:15:35.805 "params": { 01:15:35.805 "timeout_sec": 30 01:15:35.805 } 01:15:35.805 }, 01:15:35.805 { 01:15:35.805 "method": "bdev_nvme_set_options", 01:15:35.805 "params": { 01:15:35.805 "action_on_timeout": "none", 01:15:35.805 "timeout_us": 0, 01:15:35.805 "timeout_admin_us": 0, 01:15:35.805 "keep_alive_timeout_ms": 10000, 01:15:35.805 "arbitration_burst": 0, 01:15:35.805 "low_priority_weight": 0, 01:15:35.805 "medium_priority_weight": 0, 01:15:35.805 "high_priority_weight": 0, 01:15:35.805 "nvme_adminq_poll_period_us": 
10000, 01:15:35.805 "nvme_ioq_poll_period_us": 0, 01:15:35.805 "io_queue_requests": 512, 01:15:35.805 "delay_cmd_submit": true, 01:15:35.805 "transport_retry_count": 4, 01:15:35.805 "bdev_retry_count": 3, 01:15:35.805 "transport_ack_timeout": 0, 01:15:35.805 "ctrlr_loss_timeout_sec": 0, 01:15:35.805 "reconnect_delay_sec": 0, 01:15:35.805 "fast_io_fail_timeout_sec": 0, 01:15:35.805 "disable_auto_failback": false, 01:15:35.805 "generate_uuids": false, 01:15:35.805 "transport_tos": 0, 01:15:35.805 "nvme_error_stat": false, 01:15:35.805 "rdma_srq_size": 0, 01:15:35.805 "io_path_stat": false, 01:15:35.805 "allow_accel_sequence": false, 01:15:35.805 "rdma_max_cq_size": 0, 01:15:35.805 "rdma_cm_event_timeout_ms": 0, 01:15:35.805 "dhchap_digests": [ 01:15:35.805 "sha256", 01:15:35.805 "sha384", 01:15:35.805 "sha512" 01:15:35.805 ], 01:15:35.805 "dhchap_dhgroups": [ 01:15:35.805 "null", 01:15:35.805 "ffdhe2048", 01:15:35.805 "ffdhe3072", 01:15:35.805 "ffdhe4096", 01:15:35.805 "ffdhe6144", 01:15:35.805 "ffdhe8192" 01:15:35.805 ] 01:15:35.805 } 01:15:35.805 }, 01:15:35.805 { 01:15:35.805 "method": "bdev_nvme_attach_controller", 01:15:35.805 "params": { 01:15:35.805 "name": "nvme0", 01:15:35.805 "trtype": "TCP", 01:15:35.805 "adrfam": "IPv4", 01:15:35.805 "traddr": "127.0.0.1", 01:15:35.805 "trsvcid": "4420", 01:15:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:15:35.805 "prchk_reftag": false, 01:15:35.805 "prchk_guard": false, 01:15:35.805 "ctrlr_loss_timeout_sec": 0, 01:15:35.805 "reconnect_delay_sec": 0, 01:15:35.805 "fast_io_fail_timeout_sec": 0, 01:15:35.805 "psk": "key0", 01:15:35.805 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:15:35.805 "hdgst": false, 01:15:35.805 "ddgst": false, 01:15:35.805 "multipath": "multipath" 01:15:35.805 } 01:15:35.805 }, 01:15:35.805 { 01:15:35.805 "method": "bdev_nvme_set_hotplug", 01:15:35.805 "params": { 01:15:35.805 "period_us": 100000, 01:15:35.805 "enable": false 01:15:35.805 } 01:15:35.805 }, 01:15:35.805 { 01:15:35.805 "method": "bdev_wait_for_examine" 01:15:35.805 } 01:15:35.805 ] 01:15:35.805 }, 01:15:35.805 { 01:15:35.805 "subsystem": "nbd", 01:15:35.805 "config": [] 01:15:35.805 } 01:15:35.805 ] 01:15:35.805 }' 01:15:35.805 06:14:30 keyring_file -- keyring/file.sh@115 -- # killprocess 84900 01:15:35.805 06:14:30 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 84900 ']' 01:15:35.805 06:14:30 keyring_file -- common/autotest_common.sh@958 -- # kill -0 84900 01:15:35.805 06:14:30 keyring_file -- common/autotest_common.sh@959 -- # uname 01:15:35.805 06:14:30 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:35.805 06:14:30 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84900 01:15:35.805 killing process with pid 84900 01:15:35.805 Received shutdown signal, test time was about 1.000000 seconds 01:15:35.805 01:15:35.805 Latency(us) 01:15:35.805 [2024-12-09T06:14:30.392Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:15:35.805 [2024-12-09T06:14:30.392Z] =================================================================================================================== 01:15:35.805 [2024-12-09T06:14:30.392Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:15:35.805 06:14:30 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:15:35.805 06:14:30 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:15:35.805 06:14:30 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84900' 01:15:35.805 
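The keyring_file steps traced above reduce to a short RPC sequence against the bdevperf control socket. A condensed, illustrative replay of those calls, not part of the captured run (socket path, NQNs and RPC names are copied from the trace; the key-file path is a placeholder standing in for the mktemp name used here):

#!/usr/bin/env bash
# Condensed replay of the keyring_file flow traced above (illustrative sketch).
# Assumes bdevperf is already listening on /var/tmp/bperf.sock and that
# $key_path points at a pre-generated NVMeTLSkey-1 PSK file (placeholder name).
set -euo pipefail
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
key_path=${key_path:-/tmp/psk-key0.txt}

# Register the PSK file under the name "key0"
"$rpc" -s "$sock" keyring_file_add_key key0 "$key_path"

# Attach an NVMe-oF/TCP controller that uses that key as its TLS PSK
"$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

# The get_refcnt helper in the trace is a jq filter over keyring_get_keys
"$rpc" -s "$sock" keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'
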
06:14:30 keyring_file -- common/autotest_common.sh@973 -- # kill 84900 01:15:35.805 06:14:30 keyring_file -- common/autotest_common.sh@978 -- # wait 84900 01:15:36.065 06:14:30 keyring_file -- keyring/file.sh@118 -- # bperfpid=85139 01:15:36.065 06:14:30 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85139 /var/tmp/bperf.sock 01:15:36.065 06:14:30 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85139 ']' 01:15:36.065 06:14:30 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:15:36.065 06:14:30 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:36.065 06:14:30 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 01:15:36.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:15:36.065 06:14:30 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:15:36.065 06:14:30 keyring_file -- keyring/file.sh@116 -- # echo '{ 01:15:36.065 "subsystems": [ 01:15:36.065 { 01:15:36.065 "subsystem": "keyring", 01:15:36.065 "config": [ 01:15:36.065 { 01:15:36.065 "method": "keyring_file_add_key", 01:15:36.065 "params": { 01:15:36.065 "name": "key0", 01:15:36.065 "path": "/tmp/tmp.6MSdIzPd1L" 01:15:36.065 } 01:15:36.065 }, 01:15:36.065 { 01:15:36.065 "method": "keyring_file_add_key", 01:15:36.065 "params": { 01:15:36.065 "name": "key1", 01:15:36.065 "path": "/tmp/tmp.2t9h1a6OBh" 01:15:36.065 } 01:15:36.065 } 01:15:36.065 ] 01:15:36.065 }, 01:15:36.065 { 01:15:36.065 "subsystem": "iobuf", 01:15:36.065 "config": [ 01:15:36.065 { 01:15:36.065 "method": "iobuf_set_options", 01:15:36.065 "params": { 01:15:36.065 "small_pool_count": 8192, 01:15:36.065 "large_pool_count": 1024, 01:15:36.065 "small_bufsize": 8192, 01:15:36.065 "large_bufsize": 135168, 01:15:36.065 "enable_numa": false 01:15:36.065 } 01:15:36.065 } 01:15:36.065 ] 01:15:36.065 }, 01:15:36.065 { 01:15:36.065 "subsystem": "sock", 01:15:36.065 "config": [ 01:15:36.065 { 01:15:36.065 "method": "sock_set_default_impl", 01:15:36.065 "params": { 01:15:36.065 "impl_name": "uring" 01:15:36.065 } 01:15:36.065 }, 01:15:36.065 { 01:15:36.065 "method": "sock_impl_set_options", 01:15:36.065 "params": { 01:15:36.065 "impl_name": "ssl", 01:15:36.065 "recv_buf_size": 4096, 01:15:36.065 "send_buf_size": 4096, 01:15:36.065 "enable_recv_pipe": true, 01:15:36.065 "enable_quickack": false, 01:15:36.065 "enable_placement_id": 0, 01:15:36.065 "enable_zerocopy_send_server": true, 01:15:36.065 "enable_zerocopy_send_client": false, 01:15:36.065 "zerocopy_threshold": 0, 01:15:36.065 "tls_version": 0, 01:15:36.065 "enable_ktls": false 01:15:36.065 } 01:15:36.065 }, 01:15:36.065 { 01:15:36.065 "method": "sock_impl_set_options", 01:15:36.065 "params": { 01:15:36.065 "impl_name": "posix", 01:15:36.065 "recv_buf_size": 2097152, 01:15:36.065 "send_buf_size": 2097152, 01:15:36.065 "enable_recv_pipe": true, 01:15:36.065 "enable_quickack": false, 01:15:36.065 "enable_placement_id": 0, 01:15:36.065 "enable_zerocopy_send_server": true, 01:15:36.065 "enable_zerocopy_send_client": false, 01:15:36.065 "zerocopy_threshold": 0, 01:15:36.065 "tls_version": 0, 01:15:36.065 "enable_ktls": false 01:15:36.065 } 01:15:36.065 }, 01:15:36.065 { 01:15:36.065 "method": "sock_impl_set_options", 01:15:36.065 "params": { 01:15:36.065 "impl_name": "uring", 01:15:36.065 
"recv_buf_size": 2097152, 01:15:36.065 "send_buf_size": 2097152, 01:15:36.065 "enable_recv_pipe": true, 01:15:36.065 "enable_quickack": false, 01:15:36.065 "enable_placement_id": 0, 01:15:36.065 "enable_zerocopy_send_server": false, 01:15:36.065 "enable_zerocopy_send_client": false, 01:15:36.065 "zerocopy_threshold": 0, 01:15:36.065 "tls_version": 0, 01:15:36.065 "enable_ktls": false 01:15:36.065 } 01:15:36.065 } 01:15:36.065 ] 01:15:36.065 }, 01:15:36.065 { 01:15:36.065 "subsystem": "vmd", 01:15:36.065 "config": [] 01:15:36.065 }, 01:15:36.065 { 01:15:36.065 "subsystem": "accel", 01:15:36.065 "config": [ 01:15:36.065 { 01:15:36.065 "method": "accel_set_options", 01:15:36.065 "params": { 01:15:36.065 "small_cache_size": 128, 01:15:36.065 "large_cache_size": 16, 01:15:36.065 "task_count": 2048, 01:15:36.065 "sequence_count": 2048, 01:15:36.065 "buf_count": 2048 01:15:36.065 } 01:15:36.065 } 01:15:36.065 ] 01:15:36.065 }, 01:15:36.065 { 01:15:36.065 "subsystem": "bdev", 01:15:36.065 "config": [ 01:15:36.065 { 01:15:36.065 "method": "bdev_set_options", 01:15:36.065 "params": { 01:15:36.065 "bdev_io_pool_size": 65535, 01:15:36.065 "bdev_io_cache_size": 256, 01:15:36.065 "bdev_auto_examine": true, 01:15:36.065 "iobuf_small_cache_size": 128, 01:15:36.065 "iobuf_large_cache_size": 16 01:15:36.065 } 01:15:36.065 }, 01:15:36.065 { 01:15:36.065 "method": "bdev_raid_set_options", 01:15:36.065 "params": { 01:15:36.065 "process_window_size_kb": 1024, 01:15:36.065 "process_max_bandwidth_mb_sec": 0 01:15:36.065 } 01:15:36.065 }, 01:15:36.065 { 01:15:36.065 "method": "bdev_iscsi_set_options", 01:15:36.065 "params": { 01:15:36.065 "timeout_sec": 30 01:15:36.065 } 01:15:36.065 }, 01:15:36.065 { 01:15:36.065 "method": "bdev_nvme_set_options", 01:15:36.065 "params": { 01:15:36.065 "action_on_timeout": "none", 01:15:36.065 "timeout_us": 0, 01:15:36.065 "timeout_admin_us": 0, 01:15:36.065 "keep_alive_timeout_ms": 10000, 01:15:36.065 "arbitration_burst": 0, 01:15:36.065 "low_priority_weight": 0, 01:15:36.065 "medium_priority_weight": 0, 01:15:36.065 "high_priority_weight": 0, 01:15:36.065 "nvme_adminq_poll_period_us": 10000, 01:15:36.066 "nvme_io 06:14:30 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:36.066 q_poll_period_us": 0, 01:15:36.066 "io_queue_requests": 512, 01:15:36.066 "delay_cmd_submit": true, 01:15:36.066 "transport_retry_count": 4, 01:15:36.066 "bdev_retry_count": 3, 01:15:36.066 "transport_ack_timeout": 0, 01:15:36.066 "ctrlr_loss_timeout_sec": 0, 01:15:36.066 "reconnect_delay_sec": 0, 01:15:36.066 "fast_io_fail_timeout_sec": 0, 01:15:36.066 "disable_auto_failback": false, 01:15:36.066 "generate_uuids": false, 01:15:36.066 "transport_tos": 0, 01:15:36.066 "nvme_error_stat": false, 01:15:36.066 "rdma_srq_size": 0, 01:15:36.066 "io_path_stat": false, 01:15:36.066 "allow_accel_sequence": false, 01:15:36.066 "rdma_max_cq_size": 0, 01:15:36.066 "rdma_cm_event_timeout_ms": 0, 01:15:36.066 "dhchap_digests": [ 01:15:36.066 "sha256", 01:15:36.066 "sha384", 01:15:36.066 "sha512" 01:15:36.066 ], 01:15:36.066 "dhchap_dhgroups": [ 01:15:36.066 "null", 01:15:36.066 "ffdhe2048", 01:15:36.066 "ffdhe3072", 01:15:36.066 "ffdhe4096", 01:15:36.066 "ffdhe6144", 01:15:36.066 "ffdhe8192" 01:15:36.066 ] 01:15:36.066 } 01:15:36.066 }, 01:15:36.066 { 01:15:36.066 "method": "bdev_nvme_attach_controller", 01:15:36.066 "params": { 01:15:36.066 "name": "nvme0", 01:15:36.066 "trtype": "TCP", 01:15:36.066 "adrfam": "IPv4", 01:15:36.066 "traddr": "127.0.0.1", 01:15:36.066 "trsvcid": "4420", 01:15:36.066 
"subnqn": "nqn.2016-06.io.spdk:cnode0", 01:15:36.066 "prchk_reftag": false, 01:15:36.066 "prchk_guard": false, 01:15:36.066 "ctrlr_loss_timeout_sec": 0, 01:15:36.066 "reconnect_delay_sec": 0, 01:15:36.066 "fast_io_fail_timeout_sec": 0, 01:15:36.066 "psk": "key0", 01:15:36.066 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:15:36.066 "hdgst": false, 01:15:36.066 "ddgst": false, 01:15:36.066 "multipath": "multipath" 01:15:36.066 } 01:15:36.066 }, 01:15:36.066 { 01:15:36.066 "method": "bdev_nvme_set_hotplug", 01:15:36.066 "params": { 01:15:36.066 "period_us": 100000, 01:15:36.066 "enable": false 01:15:36.066 } 01:15:36.066 }, 01:15:36.066 { 01:15:36.066 "method": "bdev_wait_for_examine" 01:15:36.066 } 01:15:36.066 ] 01:15:36.066 }, 01:15:36.066 { 01:15:36.066 "subsystem": "nbd", 01:15:36.066 "config": [] 01:15:36.066 } 01:15:36.066 ] 01:15:36.066 }' 01:15:36.066 06:14:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:15:36.066 [2024-12-09 06:14:30.489277] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:15:36.066 [2024-12-09 06:14:30.489485] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85139 ] 01:15:36.066 [2024-12-09 06:14:30.641846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:36.324 [2024-12-09 06:14:30.695209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:15:36.324 [2024-12-09 06:14:30.847031] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:15:36.583 [2024-12-09 06:14:30.915147] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:15:36.842 06:14:31 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:36.842 06:14:31 keyring_file -- common/autotest_common.sh@868 -- # return 0 01:15:36.842 06:14:31 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 01:15:36.842 06:14:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:15:36.842 06:14:31 keyring_file -- keyring/file.sh@121 -- # jq length 01:15:37.101 06:14:31 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 01:15:37.101 06:14:31 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 01:15:37.101 06:14:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:15:37.101 06:14:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:15:37.101 06:14:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:15:37.101 06:14:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:15:37.101 06:14:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:15:37.366 06:14:31 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 01:15:37.366 06:14:31 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 01:15:37.366 06:14:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:15:37.366 06:14:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:15:37.366 06:14:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:15:37.366 06:14:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:15:37.366 06:14:31 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:15:37.624 06:14:31 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 01:15:37.624 06:14:31 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 01:15:37.624 06:14:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 01:15:37.624 06:14:31 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 01:15:37.624 06:14:32 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 01:15:37.624 06:14:32 keyring_file -- keyring/file.sh@1 -- # cleanup 01:15:37.624 06:14:32 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.6MSdIzPd1L /tmp/tmp.2t9h1a6OBh 01:15:37.624 06:14:32 keyring_file -- keyring/file.sh@20 -- # killprocess 85139 01:15:37.625 06:14:32 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85139 ']' 01:15:37.625 06:14:32 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85139 01:15:37.625 06:14:32 keyring_file -- common/autotest_common.sh@959 -- # uname 01:15:37.625 06:14:32 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:37.625 06:14:32 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85139 01:15:37.625 killing process with pid 85139 01:15:37.625 Received shutdown signal, test time was about 1.000000 seconds 01:15:37.625 01:15:37.625 Latency(us) 01:15:37.625 [2024-12-09T06:14:32.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:15:37.625 [2024-12-09T06:14:32.212Z] =================================================================================================================== 01:15:37.625 [2024-12-09T06:14:32.212Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:15:37.625 06:14:32 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:15:37.625 06:14:32 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:15:37.625 06:14:32 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85139' 01:15:37.625 06:14:32 keyring_file -- common/autotest_common.sh@973 -- # kill 85139 01:15:37.625 06:14:32 keyring_file -- common/autotest_common.sh@978 -- # wait 85139 01:15:37.884 06:14:32 keyring_file -- keyring/file.sh@21 -- # killprocess 84889 01:15:37.884 06:14:32 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 84889 ']' 01:15:37.884 06:14:32 keyring_file -- common/autotest_common.sh@958 -- # kill -0 84889 01:15:37.884 06:14:32 keyring_file -- common/autotest_common.sh@959 -- # uname 01:15:37.884 06:14:32 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:37.884 06:14:32 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84889 01:15:38.144 killing process with pid 84889 01:15:38.144 06:14:32 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:15:38.144 06:14:32 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:15:38.144 06:14:32 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84889' 01:15:38.144 06:14:32 keyring_file -- common/autotest_common.sh@973 -- # kill 84889 01:15:38.144 06:14:32 keyring_file -- common/autotest_common.sh@978 -- # wait 84889 01:15:38.403 01:15:38.403 real 0m13.664s 01:15:38.403 user 0m32.006s 01:15:38.403 sys 0m3.554s 01:15:38.403 06:14:32 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:38.403 06:14:32 
keyring_file -- common/autotest_common.sh@10 -- # set +x 01:15:38.403 ************************************ 01:15:38.403 END TEST keyring_file 01:15:38.403 ************************************ 01:15:38.403 06:14:32 -- spdk/autotest.sh@293 -- # [[ y == y ]] 01:15:38.403 06:14:32 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 01:15:38.403 06:14:32 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:15:38.403 06:14:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:38.403 06:14:32 -- common/autotest_common.sh@10 -- # set +x 01:15:38.403 ************************************ 01:15:38.403 START TEST keyring_linux 01:15:38.403 ************************************ 01:15:38.403 06:14:32 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 01:15:38.403 Joined session keyring: 1003719470 01:15:38.663 * Looking for test storage... 01:15:38.663 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 01:15:38.663 06:14:33 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:15:38.663 06:14:33 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 01:15:38.663 06:14:33 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:15:38.663 06:14:33 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:15:38.663 06:14:33 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:15:38.663 06:14:33 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 01:15:38.663 06:14:33 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 01:15:38.663 06:14:33 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 01:15:38.663 06:14:33 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 01:15:38.663 06:14:33 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 01:15:38.663 06:14:33 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 01:15:38.663 06:14:33 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 01:15:38.663 06:14:33 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 01:15:38.663 06:14:33 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 01:15:38.663 06:14:33 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:15:38.663 06:14:33 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 01:15:38.663 06:14:33 keyring_linux -- scripts/common.sh@345 -- # : 1 01:15:38.663 06:14:33 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 01:15:38.663 06:14:33 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:15:38.663 06:14:33 keyring_linux -- scripts/common.sh@365 -- # decimal 1 01:15:38.663 06:14:33 keyring_linux -- scripts/common.sh@353 -- # local d=1 01:15:38.663 06:14:33 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:15:38.663 06:14:33 keyring_linux -- scripts/common.sh@355 -- # echo 1 01:15:38.663 06:14:33 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 01:15:38.663 06:14:33 keyring_linux -- scripts/common.sh@366 -- # decimal 2 01:15:38.663 06:14:33 keyring_linux -- scripts/common.sh@353 -- # local d=2 01:15:38.663 06:14:33 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:15:38.663 06:14:33 keyring_linux -- scripts/common.sh@355 -- # echo 2 01:15:38.663 06:14:33 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 01:15:38.663 06:14:33 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:15:38.663 06:14:33 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:15:38.663 06:14:33 keyring_linux -- scripts/common.sh@368 -- # return 0 01:15:38.664 06:14:33 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:15:38.664 06:14:33 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:15:38.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:38.664 --rc genhtml_branch_coverage=1 01:15:38.664 --rc genhtml_function_coverage=1 01:15:38.664 --rc genhtml_legend=1 01:15:38.664 --rc geninfo_all_blocks=1 01:15:38.664 --rc geninfo_unexecuted_blocks=1 01:15:38.664 01:15:38.664 ' 01:15:38.664 06:14:33 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:15:38.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:38.664 --rc genhtml_branch_coverage=1 01:15:38.664 --rc genhtml_function_coverage=1 01:15:38.664 --rc genhtml_legend=1 01:15:38.664 --rc geninfo_all_blocks=1 01:15:38.664 --rc geninfo_unexecuted_blocks=1 01:15:38.664 01:15:38.664 ' 01:15:38.664 06:14:33 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:15:38.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:38.664 --rc genhtml_branch_coverage=1 01:15:38.664 --rc genhtml_function_coverage=1 01:15:38.664 --rc genhtml_legend=1 01:15:38.664 --rc geninfo_all_blocks=1 01:15:38.664 --rc geninfo_unexecuted_blocks=1 01:15:38.664 01:15:38.664 ' 01:15:38.664 06:14:33 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:15:38.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:38.664 --rc genhtml_branch_coverage=1 01:15:38.664 --rc genhtml_function_coverage=1 01:15:38.664 --rc genhtml_legend=1 01:15:38.664 --rc geninfo_all_blocks=1 01:15:38.664 --rc geninfo_unexecuted_blocks=1 01:15:38.664 01:15:38.664 ' 01:15:38.664 06:14:33 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 01:15:38.664 06:14:33 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@7 -- # uname -s 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:15:38.664 06:14:33 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bac40580-41f0-4da4-8cd9-1be4901a67b8 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=bac40580-41f0-4da4-8cd9-1be4901a67b8 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:15:38.664 06:14:33 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 01:15:38.664 06:14:33 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:15:38.664 06:14:33 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:15:38.664 06:14:33 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:15:38.664 06:14:33 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:38.664 06:14:33 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:38.664 06:14:33 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:38.664 06:14:33 keyring_linux -- paths/export.sh@5 -- # export PATH 01:15:38.664 06:14:33 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@51 -- # : 0 
01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:15:38.664 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 01:15:38.664 06:14:33 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 01:15:38.664 06:14:33 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 01:15:38.664 06:14:33 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 01:15:38.664 06:14:33 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 01:15:38.664 06:14:33 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 01:15:38.664 06:14:33 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 01:15:38.664 06:14:33 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 01:15:38.664 06:14:33 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 01:15:38.664 06:14:33 keyring_linux -- keyring/common.sh@17 -- # name=key0 01:15:38.664 06:14:33 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:15:38.664 06:14:33 keyring_linux -- keyring/common.sh@17 -- # digest=0 01:15:38.664 06:14:33 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 01:15:38.664 06:14:33 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@732 -- # digest=0 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@733 -- # python - 01:15:38.664 06:14:33 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 01:15:38.664 /tmp/:spdk-test:key0 01:15:38.664 06:14:33 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 01:15:38.664 06:14:33 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 01:15:38.664 06:14:33 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 01:15:38.664 06:14:33 keyring_linux -- keyring/common.sh@17 -- # name=key1 01:15:38.664 06:14:33 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 01:15:38.664 06:14:33 keyring_linux -- keyring/common.sh@17 -- # digest=0 01:15:38.664 06:14:33 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 01:15:38.664 06:14:33 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@732 -- # digest=0 01:15:38.664 06:14:33 keyring_linux -- nvmf/common.sh@733 -- # python - 01:15:38.924 06:14:33 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 01:15:38.924 /tmp/:spdk-test:key1 01:15:38.924 06:14:33 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 01:15:38.924 06:14:33 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:15:38.924 06:14:33 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85261 01:15:38.924 06:14:33 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85261 01:15:38.924 06:14:33 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85261 ']' 01:15:38.924 06:14:33 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:38.924 06:14:33 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:38.924 06:14:33 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:38.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:38.924 06:14:33 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:38.924 06:14:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:15:38.924 [2024-12-09 06:14:33.347572] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
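The prep_key/format_interchange_psk calls above produce the NVMeTLSkey-1 strings written to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1. As a rough, standalone approximation of that formatting, inferred from the values visible in this log rather than copied from nvmf/common.sh (the key material appears to be base64-encoded together with a little-endian CRC-32 and wrapped in prefix and digest fields):

#!/usr/bin/env bash
# Approximate stand-in for the format_interchange_psk step exercised above.
# ASSUMPTION: the interchange format is "<prefix>:<2-hex-digit digest>:<base64(key || crc32(key))>:",
# inferred from the NVMeTLSkey-1 strings in this log; the test's own helper lives in nvmf/common.sh.
key=00112233445566778899aabbccddeeff
digest=0

python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib

key = sys.argv[1].encode()                   # key material exactly as passed on the command line
digest = int(sys.argv[2])                    # 0 == no PSK digest
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte checksum appended before encoding
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PY
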
01:15:38.924 [2024-12-09 06:14:33.347649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85261 ] 01:15:38.924 [2024-12-09 06:14:33.498103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:39.183 [2024-12-09 06:14:33.538632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:39.183 [2024-12-09 06:14:33.594681] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:15:39.752 06:14:34 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:39.752 06:14:34 keyring_linux -- common/autotest_common.sh@868 -- # return 0 01:15:39.752 06:14:34 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 01:15:39.752 06:14:34 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:39.752 06:14:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:15:39.752 [2024-12-09 06:14:34.202267] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:15:39.752 null0 01:15:39.752 [2024-12-09 06:14:34.234186] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:15:39.752 [2024-12-09 06:14:34.234519] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:15:39.752 06:14:34 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:39.753 06:14:34 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 01:15:39.753 891862882 01:15:39.753 06:14:34 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 01:15:39.753 310503618 01:15:39.753 06:14:34 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85279 01:15:39.753 06:14:34 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 01:15:39.753 06:14:34 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85279 /var/tmp/bperf.sock 01:15:39.753 06:14:34 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85279 ']' 01:15:39.753 06:14:34 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:15:39.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:15:39.753 06:14:34 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:39.753 06:14:34 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:15:39.753 06:14:34 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:39.753 06:14:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:15:39.753 [2024-12-09 06:14:34.318336] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
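For the keyring_linux leg, the PSKs are not handed over as files: the two keyctl add user calls above park the interchange strings in the kernel session keyring, and bdevperf later resolves the names :spdk-test:key0 and :spdk-test:key1 against it. A minimal sketch of that round trip, using only keyctl operations that appear in this trace (payload copied from the log; a session keyring is assumed to exist, as the test runs under scripts/keyctl-session-wrapper):

#!/usr/bin/env bash
# Minimal kernel-keyring round trip mirroring the keyring_linux trace above.
# Assumes a session keyring is available (the test runs under keyctl-session-wrapper).
set -euo pipefail

payload='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'

# Park the PSK under the name SPDK will later be given via --psk
sn=$(keyctl add user :spdk-test:key0 "$payload" @s)

# The trace's get_keysn/check_keys helpers boil down to a search and a print
keyctl search @s user :spdk-test:key0   # prints the same serial number as $sn
keyctl print "$sn"                      # prints the NVMeTLSkey-1 payload back

# Cleanup, as in the trace's unlink_key step (the trace uses the one-argument form)
keyctl unlink "$sn" @s
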
01:15:39.753 [2024-12-09 06:14:34.318410] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85279 ] 01:15:40.012 [2024-12-09 06:14:34.473706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:40.012 [2024-12-09 06:14:34.529521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:15:40.581 06:14:35 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:40.581 06:14:35 keyring_linux -- common/autotest_common.sh@868 -- # return 0 01:15:40.581 06:14:35 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 01:15:40.839 06:14:35 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 01:15:40.839 06:14:35 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 01:15:40.839 06:14:35 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:15:41.098 [2024-12-09 06:14:35.598602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:15:41.098 06:14:35 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 01:15:41.098 06:14:35 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 01:15:41.356 [2024-12-09 06:14:35.845232] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:15:41.356 nvme0n1 01:15:41.356 06:14:35 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 01:15:41.356 06:14:35 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 01:15:41.356 06:14:35 keyring_linux -- keyring/linux.sh@20 -- # local sn 01:15:41.356 06:14:35 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 01:15:41.356 06:14:35 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:15:41.356 06:14:35 keyring_linux -- keyring/linux.sh@22 -- # jq length 01:15:41.614 06:14:36 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 01:15:41.614 06:14:36 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 01:15:41.614 06:14:36 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 01:15:41.614 06:14:36 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 01:15:41.614 06:14:36 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 01:15:41.614 06:14:36 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:15:41.614 06:14:36 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:15:41.872 06:14:36 keyring_linux -- keyring/linux.sh@25 -- # sn=891862882 01:15:41.872 06:14:36 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 01:15:41.872 06:14:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
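The bdevperf instance used here (pid 85279) is started with -z --wait-for-rpc precisely so that keyring_linux_set_options --enable can be issued before framework_start_init, as the trace above shows; only then is the controller attached with --psk :spdk-test:key0. A compressed view of that startup ordering (command line and RPC names copied from the trace; the wait-for-socket step is left as a comment because the test uses its waitforlisten helper for it):

#!/usr/bin/env bash
# Startup ordering used by the keyring_linux test: enable the linux keyring
# over RPC before the bdev framework initializes.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  -q 128 -o 4k -w randread -t 1 -m 2 -r "$sock" -z --wait-for-rpc &

# (wait until $sock exists and accepts connections; the test uses waitforlisten here)

"$rpc" -s "$sock" keyring_linux_set_options --enable
"$rpc" -s "$sock" framework_start_init
"$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
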
01:15:41.872 06:14:36 keyring_linux -- keyring/linux.sh@26 -- # [[ 891862882 == \8\9\1\8\6\2\8\8\2 ]] 01:15:41.872 06:14:36 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 891862882 01:15:41.872 06:14:36 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 01:15:41.872 06:14:36 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:15:42.131 Running I/O for 1 seconds... 01:15:43.067 13825.00 IOPS, 54.00 MiB/s 01:15:43.067 Latency(us) 01:15:43.067 [2024-12-09T06:14:37.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:15:43.067 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 01:15:43.067 nvme0n1 : 1.01 13831.97 54.03 0.00 0.00 9210.69 7474.79 18634.33 01:15:43.067 [2024-12-09T06:14:37.654Z] =================================================================================================================== 01:15:43.067 [2024-12-09T06:14:37.654Z] Total : 13831.97 54.03 0.00 0.00 9210.69 7474.79 18634.33 01:15:43.067 { 01:15:43.067 "results": [ 01:15:43.067 { 01:15:43.067 "job": "nvme0n1", 01:15:43.067 "core_mask": "0x2", 01:15:43.067 "workload": "randread", 01:15:43.067 "status": "finished", 01:15:43.067 "queue_depth": 128, 01:15:43.067 "io_size": 4096, 01:15:43.067 "runtime": 1.008822, 01:15:43.067 "iops": 13831.97432252667, 01:15:43.067 "mibps": 54.031149697369806, 01:15:43.067 "io_failed": 0, 01:15:43.067 "io_timeout": 0, 01:15:43.067 "avg_latency_us": 9210.687943230569, 01:15:43.067 "min_latency_us": 7474.78875502008, 01:15:43.067 "max_latency_us": 18634.332530120482 01:15:43.067 } 01:15:43.067 ], 01:15:43.067 "core_count": 1 01:15:43.067 } 01:15:43.067 06:14:37 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:15:43.067 06:14:37 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:15:43.326 06:14:37 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 01:15:43.326 06:14:37 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 01:15:43.326 06:14:37 keyring_linux -- keyring/linux.sh@20 -- # local sn 01:15:43.326 06:14:37 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 01:15:43.326 06:14:37 keyring_linux -- keyring/linux.sh@22 -- # jq length 01:15:43.326 06:14:37 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:15:43.586 06:14:37 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 01:15:43.586 06:14:37 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 01:15:43.586 06:14:37 keyring_linux -- keyring/linux.sh@23 -- # return 01:15:43.586 06:14:37 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:15:43.586 06:14:37 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 01:15:43.586 06:14:37 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:15:43.586 
06:14:37 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 01:15:43.586 06:14:37 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:43.586 06:14:37 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 01:15:43.586 06:14:37 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:43.586 06:14:37 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:15:43.586 06:14:37 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:15:43.586 [2024-12-09 06:14:38.111287] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:15:43.586 [2024-12-09 06:14:38.111555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106c1d0 (107): Transport endpoint is not connected 01:15:43.586 [2024-12-09 06:14:38.112540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106c1d0 (9): Bad file descriptor 01:15:43.586 [2024-12-09 06:14:38.113538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 01:15:43.586 [2024-12-09 06:14:38.113564] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 01:15:43.586 [2024-12-09 06:14:38.113574] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 01:15:43.586 [2024-12-09 06:14:38.113584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
01:15:43.586 request: 01:15:43.586 { 01:15:43.586 "name": "nvme0", 01:15:43.586 "trtype": "tcp", 01:15:43.586 "traddr": "127.0.0.1", 01:15:43.586 "adrfam": "ipv4", 01:15:43.586 "trsvcid": "4420", 01:15:43.586 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:15:43.586 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:15:43.586 "prchk_reftag": false, 01:15:43.586 "prchk_guard": false, 01:15:43.586 "hdgst": false, 01:15:43.586 "ddgst": false, 01:15:43.586 "psk": ":spdk-test:key1", 01:15:43.586 "allow_unrecognized_csi": false, 01:15:43.586 "method": "bdev_nvme_attach_controller", 01:15:43.586 "req_id": 1 01:15:43.586 } 01:15:43.586 Got JSON-RPC error response 01:15:43.586 response: 01:15:43.586 { 01:15:43.586 "code": -5, 01:15:43.586 "message": "Input/output error" 01:15:43.586 } 01:15:43.586 06:14:38 keyring_linux -- common/autotest_common.sh@655 -- # es=1 01:15:43.586 06:14:38 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:15:43.586 06:14:38 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:15:43.586 06:14:38 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:15:43.586 06:14:38 keyring_linux -- keyring/linux.sh@1 -- # cleanup 01:15:43.586 06:14:38 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 01:15:43.586 06:14:38 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 01:15:43.586 06:14:38 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 01:15:43.586 06:14:38 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 01:15:43.586 06:14:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 01:15:43.586 06:14:38 keyring_linux -- keyring/linux.sh@33 -- # sn=891862882 01:15:43.586 06:14:38 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 891862882 01:15:43.586 1 links removed 01:15:43.586 06:14:38 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 01:15:43.586 06:14:38 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 01:15:43.586 06:14:38 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 01:15:43.586 06:14:38 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 01:15:43.586 06:14:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 01:15:43.586 06:14:38 keyring_linux -- keyring/linux.sh@33 -- # sn=310503618 01:15:43.586 06:14:38 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 310503618 01:15:43.586 1 links removed 01:15:43.586 06:14:38 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85279 01:15:43.586 06:14:38 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85279 ']' 01:15:43.586 06:14:38 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85279 01:15:43.586 06:14:38 keyring_linux -- common/autotest_common.sh@959 -- # uname 01:15:43.905 06:14:38 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:43.905 06:14:38 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85279 01:15:43.905 killing process with pid 85279 01:15:43.905 Received shutdown signal, test time was about 1.000000 seconds 01:15:43.905 01:15:43.905 Latency(us) 01:15:43.905 [2024-12-09T06:14:38.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:15:43.905 [2024-12-09T06:14:38.492Z] =================================================================================================================== 01:15:43.905 [2024-12-09T06:14:38.492Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:15:43.905 06:14:38 keyring_linux -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 01:15:43.905 06:14:38 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:15:43.905 06:14:38 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85279' 01:15:43.905 06:14:38 keyring_linux -- common/autotest_common.sh@973 -- # kill 85279 01:15:43.905 06:14:38 keyring_linux -- common/autotest_common.sh@978 -- # wait 85279 01:15:43.905 06:14:38 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85261 01:15:43.905 06:14:38 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85261 ']' 01:15:43.905 06:14:38 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85261 01:15:43.905 06:14:38 keyring_linux -- common/autotest_common.sh@959 -- # uname 01:15:43.905 06:14:38 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:44.165 06:14:38 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85261 01:15:44.165 killing process with pid 85261 01:15:44.165 06:14:38 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:15:44.165 06:14:38 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:15:44.165 06:14:38 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85261' 01:15:44.165 06:14:38 keyring_linux -- common/autotest_common.sh@973 -- # kill 85261 01:15:44.165 06:14:38 keyring_linux -- common/autotest_common.sh@978 -- # wait 85261 01:15:44.425 ************************************ 01:15:44.425 END TEST keyring_linux 01:15:44.425 ************************************ 01:15:44.425 01:15:44.425 real 0m5.914s 01:15:44.425 user 0m10.451s 01:15:44.425 sys 0m1.851s 01:15:44.425 06:14:38 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:44.425 06:14:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:15:44.425 06:14:38 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 01:15:44.425 06:14:38 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 01:15:44.425 06:14:38 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 01:15:44.425 06:14:38 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 01:15:44.425 06:14:38 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 01:15:44.425 06:14:38 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 01:15:44.425 06:14:38 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 01:15:44.425 06:14:38 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 01:15:44.425 06:14:38 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 01:15:44.425 06:14:38 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 01:15:44.425 06:14:38 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 01:15:44.425 06:14:38 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 01:15:44.425 06:14:38 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 01:15:44.425 06:14:38 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 01:15:44.425 06:14:38 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 01:15:44.425 06:14:38 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 01:15:44.425 06:14:38 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 01:15:44.425 06:14:38 -- common/autotest_common.sh@726 -- # xtrace_disable 01:15:44.425 06:14:38 -- common/autotest_common.sh@10 -- # set +x 01:15:44.425 06:14:38 -- spdk/autotest.sh@388 -- # autotest_cleanup 01:15:44.425 06:14:38 -- common/autotest_common.sh@1396 -- # local autotest_es=0 01:15:44.425 06:14:38 -- common/autotest_common.sh@1397 -- # xtrace_disable 01:15:44.425 06:14:38 -- common/autotest_common.sh@10 -- # set +x 01:15:47.715 INFO: APP EXITING 01:15:47.715 INFO: killing all VMs 
01:15:47.715 INFO: killing vhost app 01:15:47.715 INFO: EXIT DONE 01:15:47.975 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:15:47.975 0000:00:11.0 (1b36 0010): Already using the nvme driver 01:15:48.234 0000:00:10.0 (1b36 0010): Already using the nvme driver 01:15:49.174 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:15:49.174 Cleaning 01:15:49.174 Removing: /var/run/dpdk/spdk0/config 01:15:49.174 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 01:15:49.174 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 01:15:49.174 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 01:15:49.174 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 01:15:49.174 Removing: /var/run/dpdk/spdk0/fbarray_memzone 01:15:49.174 Removing: /var/run/dpdk/spdk0/hugepage_info 01:15:49.174 Removing: /var/run/dpdk/spdk1/config 01:15:49.174 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 01:15:49.174 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 01:15:49.174 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 01:15:49.174 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 01:15:49.174 Removing: /var/run/dpdk/spdk1/fbarray_memzone 01:15:49.174 Removing: /var/run/dpdk/spdk1/hugepage_info 01:15:49.174 Removing: /var/run/dpdk/spdk2/config 01:15:49.174 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 01:15:49.174 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 01:15:49.174 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 01:15:49.174 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 01:15:49.174 Removing: /var/run/dpdk/spdk2/fbarray_memzone 01:15:49.174 Removing: /var/run/dpdk/spdk2/hugepage_info 01:15:49.174 Removing: /var/run/dpdk/spdk3/config 01:15:49.174 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 01:15:49.174 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 01:15:49.174 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 01:15:49.174 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 01:15:49.174 Removing: /var/run/dpdk/spdk3/fbarray_memzone 01:15:49.174 Removing: /var/run/dpdk/spdk3/hugepage_info 01:15:49.174 Removing: /var/run/dpdk/spdk4/config 01:15:49.174 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 01:15:49.174 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 01:15:49.174 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 01:15:49.174 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 01:15:49.174 Removing: /var/run/dpdk/spdk4/fbarray_memzone 01:15:49.174 Removing: /var/run/dpdk/spdk4/hugepage_info 01:15:49.174 Removing: /dev/shm/nvmf_trace.0 01:15:49.174 Removing: /dev/shm/spdk_tgt_trace.pid56620 01:15:49.174 Removing: /var/run/dpdk/spdk0 01:15:49.174 Removing: /var/run/dpdk/spdk1 01:15:49.174 Removing: /var/run/dpdk/spdk2 01:15:49.174 Removing: /var/run/dpdk/spdk3 01:15:49.174 Removing: /var/run/dpdk/spdk4 01:15:49.174 Removing: /var/run/dpdk/spdk_pid56462 01:15:49.174 Removing: /var/run/dpdk/spdk_pid56620 01:15:49.174 Removing: /var/run/dpdk/spdk_pid56821 01:15:49.174 Removing: /var/run/dpdk/spdk_pid56902 01:15:49.434 Removing: /var/run/dpdk/spdk_pid56929 01:15:49.434 Removing: /var/run/dpdk/spdk_pid57039 01:15:49.434 Removing: /var/run/dpdk/spdk_pid57057 01:15:49.434 Removing: /var/run/dpdk/spdk_pid57195 01:15:49.434 Removing: /var/run/dpdk/spdk_pid57381 01:15:49.434 Removing: /var/run/dpdk/spdk_pid57535 01:15:49.434 Removing: /var/run/dpdk/spdk_pid57607 01:15:49.434 
01:15:49.434 Removing: /var/run/dpdk/spdk_pid57686
01:15:49.434 Removing: /var/run/dpdk/spdk_pid57785
01:15:49.434 Removing: /var/run/dpdk/spdk_pid57870
01:15:49.434 Removing: /var/run/dpdk/spdk_pid57903
01:15:49.434 Removing: /var/run/dpdk/spdk_pid57933
01:15:49.434 Removing: /var/run/dpdk/spdk_pid58008
01:15:49.434 Removing: /var/run/dpdk/spdk_pid58124
01:15:49.434 Removing: /var/run/dpdk/spdk_pid58548
01:15:49.434 Removing: /var/run/dpdk/spdk_pid58600
01:15:49.434 Removing: /var/run/dpdk/spdk_pid58651
01:15:49.434 Removing: /var/run/dpdk/spdk_pid58667
01:15:49.434 Removing: /var/run/dpdk/spdk_pid58740
01:15:49.434 Removing: /var/run/dpdk/spdk_pid58756
01:15:49.434 Removing: /var/run/dpdk/spdk_pid58813
01:15:49.434 Removing: /var/run/dpdk/spdk_pid58828
01:15:49.434 Removing: /var/run/dpdk/spdk_pid58879
01:15:49.434 Removing: /var/run/dpdk/spdk_pid58897
01:15:49.434 Removing: /var/run/dpdk/spdk_pid58937
01:15:49.434 Removing: /var/run/dpdk/spdk_pid58955
01:15:49.434 Removing: /var/run/dpdk/spdk_pid59080
01:15:49.434 Removing: /var/run/dpdk/spdk_pid59121
01:15:49.434 Removing: /var/run/dpdk/spdk_pid59198
01:15:49.434 Removing: /var/run/dpdk/spdk_pid59538
01:15:49.434 Removing: /var/run/dpdk/spdk_pid59555
01:15:49.434 Removing: /var/run/dpdk/spdk_pid59586
01:15:49.434 Removing: /var/run/dpdk/spdk_pid59594
01:15:49.434 Removing: /var/run/dpdk/spdk_pid59615
01:15:49.434 Removing: /var/run/dpdk/spdk_pid59634
01:15:49.434 Removing: /var/run/dpdk/spdk_pid59642
01:15:49.434 Removing: /var/run/dpdk/spdk_pid59663
01:15:49.434 Removing: /var/run/dpdk/spdk_pid59682
01:15:49.434 Removing: /var/run/dpdk/spdk_pid59696
01:15:49.434 Removing: /var/run/dpdk/spdk_pid59711
01:15:49.434 Removing: /var/run/dpdk/spdk_pid59730
01:15:49.434 Removing: /var/run/dpdk/spdk_pid59744
01:15:49.434 Removing: /var/run/dpdk/spdk_pid59759
01:15:49.434 Removing: /var/run/dpdk/spdk_pid59778
01:15:49.434 Removing: /var/run/dpdk/spdk_pid59792
01:15:49.434 Removing: /var/run/dpdk/spdk_pid59807
01:15:49.434 Removing: /var/run/dpdk/spdk_pid59825
01:15:49.434 Removing: /var/run/dpdk/spdk_pid59836
01:15:49.434 Removing: /var/run/dpdk/spdk_pid59855
01:15:49.434 Removing: /var/run/dpdk/spdk_pid59886
01:15:49.434 Removing: /var/run/dpdk/spdk_pid59899
01:15:49.434 Removing: /var/run/dpdk/spdk_pid59929
01:15:49.434 Removing: /var/run/dpdk/spdk_pid60001
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60029
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60039
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60067
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60077
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60084
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60127
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60139
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60169
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60177
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60190
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60194
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60209
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60213
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60227
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60232
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60265
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60287
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60302
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60325
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60340
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60342
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60388
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60394
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60426
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60428
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60441
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60443
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60456
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60458
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60471
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60473
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60557
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60599
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60707
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60741
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60786
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60806
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60822
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60837
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60871
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60889
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60968
01:15:49.695 Removing: /var/run/dpdk/spdk_pid60985
01:15:49.695 Removing: /var/run/dpdk/spdk_pid61024
01:15:49.695 Removing: /var/run/dpdk/spdk_pid61100
01:15:49.695 Removing: /var/run/dpdk/spdk_pid61145
01:15:49.695 Removing: /var/run/dpdk/spdk_pid61174
01:15:49.695 Removing: /var/run/dpdk/spdk_pid61268
01:15:49.695 Removing: /var/run/dpdk/spdk_pid61317
01:15:49.695 Removing: /var/run/dpdk/spdk_pid61351
01:15:49.695 Removing: /var/run/dpdk/spdk_pid61581
01:15:49.695 Removing: /var/run/dpdk/spdk_pid61680
01:15:49.695 Removing: /var/run/dpdk/spdk_pid61703
01:15:49.955 Removing: /var/run/dpdk/spdk_pid61738
01:15:49.955 Removing: /var/run/dpdk/spdk_pid61766
01:15:49.955 Removing: /var/run/dpdk/spdk_pid61805
01:15:49.955 Removing: /var/run/dpdk/spdk_pid61842
01:15:49.955 Removing: /var/run/dpdk/spdk_pid61870
01:15:49.955 Removing: /var/run/dpdk/spdk_pid62271
01:15:49.955 Removing: /var/run/dpdk/spdk_pid62309
01:15:49.955 Removing: /var/run/dpdk/spdk_pid62650
01:15:49.955 Removing: /var/run/dpdk/spdk_pid63112
01:15:49.955 Removing: /var/run/dpdk/spdk_pid63372
01:15:49.955 Removing: /var/run/dpdk/spdk_pid64253
01:15:49.955 Removing: /var/run/dpdk/spdk_pid65185
01:15:49.955 Removing: /var/run/dpdk/spdk_pid65308
01:15:49.955 Removing: /var/run/dpdk/spdk_pid65370
01:15:49.955 Removing: /var/run/dpdk/spdk_pid66791
01:15:49.955 Removing: /var/run/dpdk/spdk_pid67110
01:15:49.955 Removing: /var/run/dpdk/spdk_pid70451
01:15:49.955 Removing: /var/run/dpdk/spdk_pid70806
01:15:49.955 Removing: /var/run/dpdk/spdk_pid70916
01:15:49.955 Removing: /var/run/dpdk/spdk_pid71056
01:15:49.955 Removing: /var/run/dpdk/spdk_pid71079
01:15:49.955 Removing: /var/run/dpdk/spdk_pid71113
01:15:49.956 Removing: /var/run/dpdk/spdk_pid71136
01:15:49.956 Removing: /var/run/dpdk/spdk_pid71230
01:15:49.956 Removing: /var/run/dpdk/spdk_pid71371
01:15:49.956 Removing: /var/run/dpdk/spdk_pid71521
01:15:49.956 Removing: /var/run/dpdk/spdk_pid71603
01:15:49.956 Removing: /var/run/dpdk/spdk_pid71791
01:15:49.956 Removing: /var/run/dpdk/spdk_pid71869
01:15:49.956 Removing: /var/run/dpdk/spdk_pid71962
01:15:49.956 Removing: /var/run/dpdk/spdk_pid72318
01:15:49.956 Removing: /var/run/dpdk/spdk_pid72746
01:15:49.956 Removing: /var/run/dpdk/spdk_pid72747
01:15:49.956 Removing: /var/run/dpdk/spdk_pid72748
01:15:49.956 Removing: /var/run/dpdk/spdk_pid73016
01:15:49.956 Removing: /var/run/dpdk/spdk_pid73297
01:15:49.956 Removing: /var/run/dpdk/spdk_pid73694
01:15:49.956 Removing: /var/run/dpdk/spdk_pid73697
01:15:49.956 Removing: /var/run/dpdk/spdk_pid74022
01:15:49.956 Removing: /var/run/dpdk/spdk_pid74036
01:15:49.956 Removing: /var/run/dpdk/spdk_pid74060
01:15:49.956 Removing: /var/run/dpdk/spdk_pid74086
01:15:49.956 Removing: /var/run/dpdk/spdk_pid74097
01:15:49.956 Removing: /var/run/dpdk/spdk_pid74450
01:15:49.956 Removing: /var/run/dpdk/spdk_pid74504
01:15:49.956 Removing: /var/run/dpdk/spdk_pid74834
01:15:49.956 Removing: /var/run/dpdk/spdk_pid75035
01:15:49.956 Removing: /var/run/dpdk/spdk_pid75466
01:15:49.956 Removing: /var/run/dpdk/spdk_pid76017
01:15:49.956 Removing: /var/run/dpdk/spdk_pid76842
01:15:49.956 Removing: /var/run/dpdk/spdk_pid77496
01:15:49.956 Removing: /var/run/dpdk/spdk_pid77499
01:15:49.956 Removing: /var/run/dpdk/spdk_pid79533
01:15:50.215 Removing: /var/run/dpdk/spdk_pid79588
01:15:50.215 Removing: /var/run/dpdk/spdk_pid79648
01:15:50.215 Removing: /var/run/dpdk/spdk_pid79702
01:15:50.215 Removing: /var/run/dpdk/spdk_pid79824
01:15:50.216 Removing: /var/run/dpdk/spdk_pid79880
01:15:50.216 Removing: /var/run/dpdk/spdk_pid79940
01:15:50.216 Removing: /var/run/dpdk/spdk_pid79995
01:15:50.216 Removing: /var/run/dpdk/spdk_pid80361
01:15:50.216 Removing: /var/run/dpdk/spdk_pid81572
01:15:50.216 Removing: /var/run/dpdk/spdk_pid81713
01:15:50.216 Removing: /var/run/dpdk/spdk_pid81955
01:15:50.216 Removing: /var/run/dpdk/spdk_pid82572
01:15:50.216 Removing: /var/run/dpdk/spdk_pid82737
01:15:50.216 Removing: /var/run/dpdk/spdk_pid82900
01:15:50.216 Removing: /var/run/dpdk/spdk_pid82997
01:15:50.216 Removing: /var/run/dpdk/spdk_pid83167
01:15:50.216 Removing: /var/run/dpdk/spdk_pid83276
01:15:50.216 Removing: /var/run/dpdk/spdk_pid84005
01:15:50.216 Removing: /var/run/dpdk/spdk_pid84040
01:15:50.216 Removing: /var/run/dpdk/spdk_pid84081
01:15:50.216 Removing: /var/run/dpdk/spdk_pid84339
01:15:50.216 Removing: /var/run/dpdk/spdk_pid84371
01:15:50.216 Removing: /var/run/dpdk/spdk_pid84407
01:15:50.216 Removing: /var/run/dpdk/spdk_pid84889
01:15:50.216 Removing: /var/run/dpdk/spdk_pid84900
01:15:50.216 Removing: /var/run/dpdk/spdk_pid85139
01:15:50.216 Removing: /var/run/dpdk/spdk_pid85261
01:15:50.216 Removing: /var/run/dpdk/spdk_pid85279
01:15:50.216 Clean
01:15:50.216 06:14:44 -- common/autotest_common.sh@1453 -- # return 0
01:15:50.216 06:14:44 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
01:15:50.216 06:14:44 -- common/autotest_common.sh@732 -- # xtrace_disable
01:15:50.216 06:14:44 -- common/autotest_common.sh@10 -- # set +x
01:15:50.475 06:14:44 -- spdk/autotest.sh@391 -- # timing_exit autotest
01:15:50.475 06:14:44 -- common/autotest_common.sh@732 -- # xtrace_disable
01:15:50.475 06:14:44 -- common/autotest_common.sh@10 -- # set +x
01:15:50.475 06:14:44 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
01:15:50.475 06:14:44 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
01:15:50.475 06:14:44 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
01:15:50.475 06:14:44 -- spdk/autotest.sh@396 -- # [[ y == y ]]
01:15:50.475 06:14:44 -- spdk/autotest.sh@398 -- # hostname
01:15:50.475 06:14:44 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
01:15:50.735 geninfo: WARNING: invalid characters removed from testname!
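The "Cleaning" / "Removing:" block above is the post-test sweep of per-process DPDK runtime state (the /var/run/dpdk/spdkN directories, the spdk_pid* lock files) and the SPDK trace files left in /dev/shm. A hypothetical bash equivalent of that sweep, for orientation only (the paths mirror the log, but this is not the code autotest actually runs):

    #!/usr/bin/env bash
    # Hypothetical equivalent of the "Cleaning" sweep logged above: remove DPDK
    # runtime directories, per-PID lock files, and shared-memory trace files
    # left behind by SPDK targets started during the test run.
    set -euo pipefail
    shopt -s nullglob

    for f in /var/run/dpdk/spdk*/ /var/run/dpdk/spdk_pid* /dev/shm/*_trace.*; do
        echo "Removing: ${f%/}"
        rm -rf -- "$f"
    done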
01:16:17.287 06:15:10 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:16:19.196 06:15:13 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:16:21.104 06:15:15 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:16:23.637 06:15:17 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:16:25.540 06:15:19 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:16:27.466 06:15:21 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:16:30.005 06:15:23 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
01:16:30.005 06:15:23 -- spdk/autorun.sh@1 -- $ timing_finish
01:16:30.005 06:15:23 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
01:16:30.005 06:15:23 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
01:16:30.005 06:15:23 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
01:16:30.005 06:15:23 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
01:16:30.005 + [[ -n 5201 ]]
01:16:30.005 + sudo kill 5201
01:16:30.015 [Pipeline] }
01:16:30.032 [Pipeline] // timeout
01:16:30.037 [Pipeline] }
01:16:30.052 [Pipeline] // stage
01:16:30.058 [Pipeline] }
01:16:30.074 [Pipeline] // catchError
01:16:30.083 [Pipeline] stage
01:16:30.086 [Pipeline] { (Stop VM)
01:16:30.099 [Pipeline] sh
01:16:30.385 + vagrant halt
01:16:33.682 ==> default: Halting domain...
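The lcov invocations above (the capture at the end of the previous block plus the merges and filters here) form a capture → merge → filter pipeline: capture counters from the SPDK source tree, add them to the pre-test cov_base.info baseline, then strip paths that should not count toward SPDK coverage. The same flow reduced to its essentials, as a sketch; SPDK_DIR and OUT are shorthand introduced here for the spdk_repo paths in the log, and the --rc and --ignore-errors knobs are omitted:

    #!/usr/bin/env bash
    # Minimal sketch of the coverage post-processing shown in the log above.
    set -euo pipefail
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    OUT=$SPDK_DIR/../output

    # Capture counters produced by the test run into a per-test tracefile.
    lcov -q -c --no-external -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"

    # Merge with the zero-coverage baseline captured before the tests ran.
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

    # Drop paths that should not count toward SPDK coverage.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
    done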
01:16:40.363 [Pipeline] sh
01:16:40.645 + vagrant destroy -f
01:16:43.180 ==> default: Removing domain...
01:16:43.454 [Pipeline] sh
01:16:43.737 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
01:16:43.746 [Pipeline] }
01:16:43.758 [Pipeline] // stage
01:16:43.763 [Pipeline] }
01:16:43.775 [Pipeline] // dir
01:16:43.780 [Pipeline] }
01:16:43.791 [Pipeline] // wrap
01:16:43.796 [Pipeline] }
01:16:43.807 [Pipeline] // catchError
01:16:43.816 [Pipeline] stage
01:16:43.817 [Pipeline] { (Epilogue)
01:16:43.829 [Pipeline] sh
01:16:44.108 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
01:16:49.388 [Pipeline] catchError
01:16:49.390 [Pipeline] {
01:16:49.403 [Pipeline] sh
01:16:49.688 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
01:16:49.688 Artifacts sizes are good
01:16:49.698 [Pipeline] }
01:16:49.712 [Pipeline] // catchError
01:16:49.725 [Pipeline] archiveArtifacts
01:16:49.732 Archiving artifacts
01:16:49.850 [Pipeline] cleanWs
01:16:49.863 [WS-CLEANUP] Deleting project workspace...
01:16:49.863 [WS-CLEANUP] Deferred wipeout is used...
01:16:49.870 [WS-CLEANUP] done
01:16:49.872 [Pipeline] }
01:16:49.888 [Pipeline] // stage
01:16:49.894 [Pipeline] }
01:16:49.909 [Pipeline] // node
01:16:49.915 [Pipeline] End of Pipeline
01:16:49.959 Finished: SUCCESS
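In the epilogue above, check_artifacts_size.sh prints only its verdict ("Artifacts sizes are good"), so the actual accounting and limit are not visible in this log. A hypothetical stand-in that performs the same kind of gate; the 512 MB cap and the du-based size check are assumptions, not the real script's logic:

    #!/usr/bin/env bash
    # Hypothetical stand-in for check_artifacts_size.sh: fail the build if the
    # compressed artifacts grow beyond a limit. LIMIT_MB is an assumed value.
    set -euo pipefail
    ARTIFACT_DIR=${1:-output}
    LIMIT_MB=512

    used_mb=$(du -sm "$ARTIFACT_DIR" | awk '{print $1}')
    if (( used_mb > LIMIT_MB )); then
        echo "Artifacts too large: ${used_mb} MB (limit ${LIMIT_MB} MB)" >&2
        exit 1
    fi
    echo "Artifacts sizes are good"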